00:00:00.000 Started by upstream project "autotest-per-patch" build number 122824 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.033 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.033 The recommended git tool is: git 00:00:00.033 using credential 00000000-0000-0000-0000-000000000002 00:00:00.035 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.066 Fetching changes from the remote Git repository 00:00:00.068 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.096 Using shallow fetch with depth 1 00:00:00.096 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.096 > git --version # timeout=10 00:00:00.121 > git --version # 'git version 2.39.2' 00:00:00.121 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.122 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.122 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.594 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.605 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.616 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD) 00:00:03.616 > git config core.sparsecheckout # timeout=10 00:00:03.627 > git read-tree -mu HEAD # timeout=10 00:00:03.643 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5 00:00:03.663 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule" 00:00:03.663 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10 00:00:03.752 [Pipeline] Start of Pipeline 00:00:03.764 [Pipeline] library 00:00:03.765 Loading library shm_lib@master 00:00:03.765 Library shm_lib@master is cached. Copying from home. 00:00:03.779 [Pipeline] node 00:00:03.786 Running on WFP22 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.787 [Pipeline] { 00:00:03.797 [Pipeline] catchError 00:00:03.798 [Pipeline] { 00:00:03.813 [Pipeline] wrap 00:00:03.823 [Pipeline] { 00:00:03.830 [Pipeline] stage 00:00:03.831 [Pipeline] { (Prologue) 00:00:04.005 [Pipeline] sh 00:00:04.286 + logger -p user.info -t JENKINS-CI 00:00:04.300 [Pipeline] echo 00:00:04.301 Node: WFP22 00:00:04.308 [Pipeline] sh 00:00:04.607 [Pipeline] setCustomBuildProperty 00:00:04.618 [Pipeline] echo 00:00:04.619 Cleanup processes 00:00:04.625 [Pipeline] sh 00:00:04.914 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.914 3799214 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.925 [Pipeline] sh 00:00:05.203 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.203 ++ grep -v 'sudo pgrep' 00:00:05.203 ++ awk '{print $1}' 00:00:05.203 + sudo kill -9 00:00:05.203 + true 00:00:05.217 [Pipeline] cleanWs 00:00:05.227 [WS-CLEANUP] Deleting project workspace... 00:00:05.227 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.233 [WS-CLEANUP] done 00:00:05.238 [Pipeline] setCustomBuildProperty 00:00:05.252 [Pipeline] sh 00:00:05.538 + sudo git config --global --replace-all safe.directory '*' 00:00:05.602 [Pipeline] nodesByLabel 00:00:05.603 Found a total of 1 nodes with the 'sorcerer' label 00:00:05.610 [Pipeline] httpRequest 00:00:05.614 HttpMethod: GET 00:00:05.614 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:05.620 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:05.633 Response Code: HTTP/1.1 200 OK 00:00:05.633 Success: Status code 200 is in the accepted range: 200,404 00:00:05.634 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:08.819 [Pipeline] sh 00:00:09.101 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:09.121 [Pipeline] httpRequest 00:00:09.125 HttpMethod: GET 00:00:09.126 URL: http://10.211.164.101/packages/spdk_aa13730dbbb63b460d387a1fd9561480a8aceb80.tar.gz 00:00:09.126 Sending request to url: http://10.211.164.101/packages/spdk_aa13730dbbb63b460d387a1fd9561480a8aceb80.tar.gz 00:00:09.138 Response Code: HTTP/1.1 200 OK 00:00:09.139 Success: Status code 200 is in the accepted range: 200,404 00:00:09.140 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_aa13730dbbb63b460d387a1fd9561480a8aceb80.tar.gz 00:00:41.586 [Pipeline] sh 00:00:41.869 + tar --no-same-owner -xf spdk_aa13730dbbb63b460d387a1fd9561480a8aceb80.tar.gz 00:00:44.417 [Pipeline] sh 00:00:44.699 + git -C spdk log --oneline -n5 00:00:44.699 aa13730db nvmf: set controller's DH-HMAC-CHAP key 00:00:44.699 012e50f8d nvmf: rework rpc_nvmf_subsystem_add_host()'s error handling 00:00:44.699 e8841656d nvmf: add nvmf_host_free() 00:00:44.699 e53d15a2a nvmf/tcp: flush sockets when removing from a sock group 00:00:44.699 5b83ef1c4 nvmf/auth: Diffie-Hellman exchange support 00:00:44.711 [Pipeline] } 00:00:44.729 [Pipeline] // stage 00:00:44.738 [Pipeline] stage 00:00:44.740 [Pipeline] { (Prepare) 00:00:44.760 [Pipeline] writeFile 00:00:44.777 [Pipeline] sh 00:00:45.060 + logger -p user.info -t JENKINS-CI 00:00:45.074 [Pipeline] sh 00:00:45.357 + logger -p user.info -t JENKINS-CI 00:00:45.373 [Pipeline] sh 00:00:45.656 + cat autorun-spdk.conf 00:00:45.656 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:45.656 SPDK_TEST_NVMF=1 00:00:45.656 SPDK_TEST_NVME_CLI=1 00:00:45.656 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:45.656 SPDK_TEST_NVMF_NICS=e810 00:00:45.656 SPDK_TEST_VFIOUSER=1 00:00:45.656 SPDK_RUN_UBSAN=1 00:00:45.656 NET_TYPE=phy 00:00:45.663 RUN_NIGHTLY=0 00:00:45.668 [Pipeline] readFile 00:00:45.694 [Pipeline] withEnv 00:00:45.696 [Pipeline] { 00:00:45.710 [Pipeline] sh 00:00:45.993 + set -ex 00:00:45.993 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:45.993 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:45.993 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:45.993 ++ SPDK_TEST_NVMF=1 00:00:45.993 ++ SPDK_TEST_NVME_CLI=1 00:00:45.993 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:45.993 ++ SPDK_TEST_NVMF_NICS=e810 00:00:45.993 ++ SPDK_TEST_VFIOUSER=1 00:00:45.993 ++ SPDK_RUN_UBSAN=1 00:00:45.993 ++ NET_TYPE=phy 00:00:45.993 ++ RUN_NIGHTLY=0 00:00:45.993 + case $SPDK_TEST_NVMF_NICS in 00:00:45.993 + DRIVERS=ice 00:00:45.993 + [[ tcp == \r\d\m\a ]] 00:00:45.993 + [[ -n ice ]] 00:00:45.993 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:45.993 rmmod: ERROR: Module 
mlx4_ib is not currently loaded 00:00:45.993 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:45.993 rmmod: ERROR: Module irdma is not currently loaded 00:00:45.993 rmmod: ERROR: Module i40iw is not currently loaded 00:00:45.993 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:45.993 + true 00:00:45.993 + for D in $DRIVERS 00:00:45.993 + sudo modprobe ice 00:00:45.993 + exit 0 00:00:46.003 [Pipeline] } 00:00:46.021 [Pipeline] // withEnv 00:00:46.027 [Pipeline] } 00:00:46.043 [Pipeline] // stage 00:00:46.053 [Pipeline] catchError 00:00:46.055 [Pipeline] { 00:00:46.071 [Pipeline] timeout 00:00:46.071 Timeout set to expire in 40 min 00:00:46.073 [Pipeline] { 00:00:46.090 [Pipeline] stage 00:00:46.092 [Pipeline] { (Tests) 00:00:46.112 [Pipeline] sh 00:00:46.399 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:46.399 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:46.399 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:46.399 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:46.399 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:46.399 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:46.399 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:46.399 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:46.399 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:46.399 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:46.399 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:46.399 + source /etc/os-release 00:00:46.399 ++ NAME='Fedora Linux' 00:00:46.399 ++ VERSION='38 (Cloud Edition)' 00:00:46.399 ++ ID=fedora 00:00:46.399 ++ VERSION_ID=38 00:00:46.399 ++ VERSION_CODENAME= 00:00:46.399 ++ PLATFORM_ID=platform:f38 00:00:46.399 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:46.399 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:46.399 ++ LOGO=fedora-logo-icon 00:00:46.399 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:46.399 ++ HOME_URL=https://fedoraproject.org/ 00:00:46.399 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:46.399 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:46.399 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:46.399 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:46.399 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:46.399 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:46.399 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:46.399 ++ SUPPORT_END=2024-05-14 00:00:46.399 ++ VARIANT='Cloud Edition' 00:00:46.399 ++ VARIANT_ID=cloud 00:00:46.399 + uname -a 00:00:46.399 Linux spdk-wfp-22 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:46.399 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:49.686 Hugepages 00:00:49.686 node hugesize free / total 00:00:49.686 node0 1048576kB 0 / 0 00:00:49.686 node0 2048kB 0 / 0 00:00:49.686 node1 1048576kB 0 / 0 00:00:49.686 node1 2048kB 0 / 0 00:00:49.686 00:00:49.686 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:49.686 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:00:49.686 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:00:49.686 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:00:49.686 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:00:49.686 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:00:49.686 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:00:49.686 I/OAT 0000:00:04.6 8086 2021 0 
ioatdma - - 00:00:49.686 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:00:49.686 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:00:49.686 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:00:49.686 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:00:49.686 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:00:49.686 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:00:49.686 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:00:49.686 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:00:49.686 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:00:49.686 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:49.686 + rm -f /tmp/spdk-ld-path 00:00:49.686 + source autorun-spdk.conf 00:00:49.686 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:49.686 ++ SPDK_TEST_NVMF=1 00:00:49.686 ++ SPDK_TEST_NVME_CLI=1 00:00:49.686 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:49.686 ++ SPDK_TEST_NVMF_NICS=e810 00:00:49.686 ++ SPDK_TEST_VFIOUSER=1 00:00:49.686 ++ SPDK_RUN_UBSAN=1 00:00:49.686 ++ NET_TYPE=phy 00:00:49.686 ++ RUN_NIGHTLY=0 00:00:49.686 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:49.686 + [[ -n '' ]] 00:00:49.686 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:49.686 + for M in /var/spdk/build-*-manifest.txt 00:00:49.686 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:49.686 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:49.686 + for M in /var/spdk/build-*-manifest.txt 00:00:49.686 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:49.686 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:49.686 ++ uname 00:00:49.686 + [[ Linux == \L\i\n\u\x ]] 00:00:49.686 + sudo dmesg -T 00:00:49.686 + sudo dmesg --clear 00:00:49.686 + dmesg_pid=3800676 00:00:49.686 + [[ Fedora Linux == FreeBSD ]] 00:00:49.686 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:49.686 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:49.686 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:49.686 + [[ -x /usr/src/fio-static/fio ]] 00:00:49.686 + export FIO_BIN=/usr/src/fio-static/fio 00:00:49.686 + FIO_BIN=/usr/src/fio-static/fio 00:00:49.686 + sudo dmesg -Tw 00:00:49.686 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:49.686 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:49.686 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:49.686 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:49.686 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:49.686 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:49.686 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:49.686 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:49.686 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:49.686 Test configuration: 00:00:49.686 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:49.686 SPDK_TEST_NVMF=1 00:00:49.686 SPDK_TEST_NVME_CLI=1 00:00:49.686 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:49.686 SPDK_TEST_NVMF_NICS=e810 00:00:49.686 SPDK_TEST_VFIOUSER=1 00:00:49.686 SPDK_RUN_UBSAN=1 00:00:49.686 NET_TYPE=phy 00:00:49.686 RUN_NIGHTLY=0 01:03:25 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:49.686 01:03:25 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:49.686 01:03:25 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:49.686 01:03:25 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:49.686 01:03:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:49.686 01:03:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:49.686 01:03:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:49.686 01:03:25 -- paths/export.sh@5 -- $ export PATH 00:00:49.686 01:03:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:49.686 01:03:25 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:49.686 01:03:25 -- common/autobuild_common.sh@437 -- $ date +%s 00:00:49.686 01:03:25 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715727805.XXXXXX 00:00:49.686 01:03:25 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715727805.QIPFoP 00:00:49.686 01:03:25 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:00:49.686 01:03:25 -- 
common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:00:49.686 01:03:25 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:49.686 01:03:25 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:49.686 01:03:25 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:49.686 01:03:25 -- common/autobuild_common.sh@453 -- $ get_config_params 00:00:49.686 01:03:25 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:00:49.686 01:03:25 -- common/autotest_common.sh@10 -- $ set +x 00:00:49.686 01:03:25 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:49.686 01:03:25 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:00:49.686 01:03:25 -- pm/common@17 -- $ local monitor 00:00:49.686 01:03:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:49.686 01:03:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:49.686 01:03:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:49.686 01:03:25 -- pm/common@21 -- $ date +%s 00:00:49.686 01:03:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:49.686 01:03:25 -- pm/common@21 -- $ date +%s 00:00:49.686 01:03:25 -- pm/common@25 -- $ sleep 1 00:00:49.686 01:03:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715727805 00:00:49.686 01:03:25 -- pm/common@21 -- $ date +%s 00:00:49.687 01:03:25 -- pm/common@21 -- $ date +%s 00:00:49.687 01:03:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715727805 00:00:49.687 01:03:25 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715727805 00:00:49.687 01:03:25 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715727805 00:00:49.945 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715727805_collect-cpu-load.pm.log 00:00:49.945 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715727805_collect-vmstat.pm.log 00:00:49.945 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715727805_collect-cpu-temp.pm.log 00:00:49.946 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715727805_collect-bmc-pm.bmc.pm.log 00:00:50.882 01:03:26 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:00:50.882 01:03:26 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:50.882 01:03:26 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:50.882 01:03:26 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:50.882 01:03:26 -- spdk/autobuild.sh@16 -- $ date -u 00:00:50.882 Tue May 14 11:03:26 PM UTC 2024 00:00:50.882 01:03:26 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:50.882 v24.05-pre-641-gaa13730db 00:00:50.882 01:03:26 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:50.882 01:03:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:50.882 01:03:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:50.882 01:03:26 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:00:50.882 01:03:26 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:00:50.882 01:03:26 -- common/autotest_common.sh@10 -- $ set +x 00:00:50.882 ************************************ 00:00:50.882 START TEST ubsan 00:00:50.882 ************************************ 00:00:50.882 01:03:26 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:00:50.882 using ubsan 00:00:50.882 00:00:50.882 real 0m0.000s 00:00:50.882 user 0m0.000s 00:00:50.882 sys 0m0.000s 00:00:50.882 01:03:26 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:00:50.882 01:03:26 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:50.882 ************************************ 00:00:50.882 END TEST ubsan 00:00:50.882 ************************************ 00:00:50.882 01:03:26 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:50.882 01:03:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:50.882 01:03:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:50.882 01:03:26 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:50.882 01:03:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:50.882 01:03:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:50.882 01:03:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:50.882 01:03:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:50.882 01:03:26 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:51.140 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:51.140 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:51.399 Using 'verbs' RDMA provider 00:01:04.543 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:19.429 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:19.429 Creating mk/config.mk...done. 00:01:19.429 Creating mk/cc.flags.mk...done. 00:01:19.429 Type 'make' to build. 00:01:19.429 01:03:53 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:01:19.429 01:03:53 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:19.429 01:03:53 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:19.429 01:03:53 -- common/autotest_common.sh@10 -- $ set +x 00:01:19.429 ************************************ 00:01:19.429 START TEST make 00:01:19.429 ************************************ 00:01:19.429 01:03:53 make -- common/autotest_common.sh@1121 -- $ make -j112 00:01:19.429 make[1]: Nothing to be done for 'all'. 
00:01:19.687 The Meson build system 00:01:19.687 Version: 1.3.1 00:01:19.687 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:19.687 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:19.687 Build type: native build 00:01:19.687 Project name: libvfio-user 00:01:19.687 Project version: 0.0.1 00:01:19.687 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:19.687 C linker for the host machine: cc ld.bfd 2.39-16 00:01:19.687 Host machine cpu family: x86_64 00:01:19.687 Host machine cpu: x86_64 00:01:19.687 Run-time dependency threads found: YES 00:01:19.687 Library dl found: YES 00:01:19.687 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:19.687 Run-time dependency json-c found: YES 0.17 00:01:19.687 Run-time dependency cmocka found: YES 1.1.7 00:01:19.687 Program pytest-3 found: NO 00:01:19.687 Program flake8 found: NO 00:01:19.688 Program misspell-fixer found: NO 00:01:19.688 Program restructuredtext-lint found: NO 00:01:19.688 Program valgrind found: YES (/usr/bin/valgrind) 00:01:19.688 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:19.688 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:19.688 Compiler for C supports arguments -Wwrite-strings: YES 00:01:19.688 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:19.688 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:19.688 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:19.688 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:19.688 Build targets in project: 8 00:01:19.688 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:19.688 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:19.688 00:01:19.688 libvfio-user 0.0.1 00:01:19.688 00:01:19.688 User defined options 00:01:19.688 buildtype : debug 00:01:19.688 default_library: shared 00:01:19.688 libdir : /usr/local/lib 00:01:19.688 00:01:19.688 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:20.253 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:20.253 [1/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:20.253 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:20.253 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:20.253 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:20.253 [5/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:20.253 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:20.253 [7/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:20.253 [8/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:20.253 [9/37] Compiling C object samples/null.p/null.c.o 00:01:20.253 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:20.253 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:20.253 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:20.253 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:20.253 [14/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:20.253 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:20.253 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:20.253 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:20.253 [18/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:20.253 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:20.253 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:20.253 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:20.253 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:20.253 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:20.253 [24/37] Compiling C object samples/server.p/server.c.o 00:01:20.253 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:20.512 [26/37] Compiling C object samples/client.p/client.c.o 00:01:20.512 [27/37] Linking target samples/client 00:01:20.512 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:20.512 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:20.512 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:20.512 [31/37] Linking target test/unit_tests 00:01:20.770 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:20.770 [33/37] Linking target samples/gpio-pci-idio-16 00:01:20.770 [34/37] Linking target samples/lspci 00:01:20.770 [35/37] Linking target samples/null 00:01:20.770 [36/37] Linking target samples/server 00:01:20.770 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:20.770 INFO: autodetecting backend as ninja 00:01:20.770 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:20.770 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:21.028 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:21.028 ninja: no work to do. 00:01:26.301 The Meson build system 00:01:26.301 Version: 1.3.1 00:01:26.301 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:26.301 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:26.301 Build type: native build 00:01:26.301 Program cat found: YES (/usr/bin/cat) 00:01:26.301 Project name: DPDK 00:01:26.301 Project version: 23.11.0 00:01:26.301 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:26.301 C linker for the host machine: cc ld.bfd 2.39-16 00:01:26.301 Host machine cpu family: x86_64 00:01:26.301 Host machine cpu: x86_64 00:01:26.301 Message: ## Building in Developer Mode ## 00:01:26.301 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:26.301 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:26.301 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:26.301 Program python3 found: YES (/usr/bin/python3) 00:01:26.301 Program cat found: YES (/usr/bin/cat) 00:01:26.301 Compiler for C supports arguments -march=native: YES 00:01:26.301 Checking for size of "void *" : 8 00:01:26.301 Checking for size of "void *" : 8 (cached) 00:01:26.301 Library m found: YES 00:01:26.301 Library numa found: YES 00:01:26.301 Has header "numaif.h" : YES 00:01:26.301 Library fdt found: NO 00:01:26.301 Library execinfo found: NO 00:01:26.301 Has header "execinfo.h" : YES 00:01:26.301 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:26.301 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:26.301 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:26.301 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:26.301 Run-time dependency openssl found: YES 3.0.9 00:01:26.301 Run-time dependency libpcap found: YES 1.10.4 00:01:26.301 Has header "pcap.h" with dependency libpcap: YES 00:01:26.301 Compiler for C supports arguments -Wcast-qual: YES 00:01:26.301 Compiler for C supports arguments -Wdeprecated: YES 00:01:26.301 Compiler for C supports arguments -Wformat: YES 00:01:26.301 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:26.301 Compiler for C supports arguments -Wformat-security: NO 00:01:26.301 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:26.301 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:26.301 Compiler for C supports arguments -Wnested-externs: YES 00:01:26.301 Compiler for C supports arguments -Wold-style-definition: YES 00:01:26.301 Compiler for C supports arguments -Wpointer-arith: YES 00:01:26.301 Compiler for C supports arguments -Wsign-compare: YES 00:01:26.301 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:26.301 Compiler for C supports arguments -Wundef: YES 00:01:26.301 Compiler for C supports arguments -Wwrite-strings: YES 00:01:26.301 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:26.301 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:26.301 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:26.301 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:26.301 Program objdump found: YES (/usr/bin/objdump) 00:01:26.302 Compiler for C supports arguments -mavx512f: YES 00:01:26.302 Checking if "AVX512 checking" compiles: YES 00:01:26.302 Fetching value of define "__SSE4_2__" : 1 00:01:26.302 Fetching value of define "__AES__" : 1 00:01:26.302 Fetching value of define "__AVX__" : 1 00:01:26.302 Fetching value of define "__AVX2__" : 1 00:01:26.302 Fetching value of define "__AVX512BW__" : 1 00:01:26.302 Fetching value of define "__AVX512CD__" : 1 00:01:26.302 Fetching value of define "__AVX512DQ__" : 1 00:01:26.302 Fetching value of define "__AVX512F__" : 1 00:01:26.302 Fetching value of define "__AVX512VL__" : 1 00:01:26.302 Fetching value of define "__PCLMUL__" : 1 00:01:26.302 Fetching value of define "__RDRND__" : 1 00:01:26.302 Fetching value of define "__RDSEED__" : 1 00:01:26.302 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:26.302 Fetching value of define "__znver1__" : (undefined) 00:01:26.302 Fetching value of define "__znver2__" : (undefined) 00:01:26.302 Fetching value of define "__znver3__" : (undefined) 00:01:26.302 Fetching value of define "__znver4__" : (undefined) 00:01:26.302 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:26.302 Message: lib/log: Defining dependency "log" 00:01:26.302 Message: lib/kvargs: Defining dependency "kvargs" 00:01:26.302 Message: lib/telemetry: Defining dependency "telemetry" 00:01:26.302 Checking for function "getentropy" : NO 00:01:26.302 Message: lib/eal: Defining dependency "eal" 00:01:26.302 Message: lib/ring: Defining dependency "ring" 00:01:26.302 Message: lib/rcu: Defining dependency "rcu" 00:01:26.302 Message: lib/mempool: Defining dependency "mempool" 00:01:26.302 Message: lib/mbuf: Defining dependency "mbuf" 00:01:26.302 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:26.302 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:26.302 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:26.302 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:26.302 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:26.302 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:26.302 Compiler for C supports arguments -mpclmul: YES 00:01:26.302 Compiler for C supports arguments -maes: YES 00:01:26.302 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:26.302 Compiler for C supports arguments -mavx512bw: YES 00:01:26.302 Compiler for C supports arguments -mavx512dq: YES 00:01:26.302 Compiler for C supports arguments -mavx512vl: YES 00:01:26.302 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:26.302 Compiler for C supports arguments -mavx2: YES 00:01:26.302 Compiler for C supports arguments -mavx: YES 00:01:26.302 Message: lib/net: Defining dependency "net" 00:01:26.302 Message: lib/meter: Defining dependency "meter" 00:01:26.302 Message: lib/ethdev: Defining dependency "ethdev" 00:01:26.302 Message: lib/pci: Defining dependency "pci" 00:01:26.302 Message: lib/cmdline: Defining dependency "cmdline" 00:01:26.302 Message: lib/hash: Defining dependency "hash" 00:01:26.302 Message: lib/timer: Defining dependency "timer" 00:01:26.302 Message: lib/compressdev: Defining dependency "compressdev" 00:01:26.302 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:26.302 Message: lib/dmadev: Defining dependency "dmadev" 00:01:26.302 Compiler for C supports arguments -Wno-cast-qual: YES 
00:01:26.302 Message: lib/power: Defining dependency "power" 00:01:26.302 Message: lib/reorder: Defining dependency "reorder" 00:01:26.302 Message: lib/security: Defining dependency "security" 00:01:26.302 Has header "linux/userfaultfd.h" : YES 00:01:26.302 Has header "linux/vduse.h" : YES 00:01:26.302 Message: lib/vhost: Defining dependency "vhost" 00:01:26.302 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:26.302 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:26.302 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:26.302 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:26.302 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:26.302 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:26.302 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:26.302 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:26.302 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:26.302 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:26.302 Program doxygen found: YES (/usr/bin/doxygen) 00:01:26.302 Configuring doxy-api-html.conf using configuration 00:01:26.302 Configuring doxy-api-man.conf using configuration 00:01:26.302 Program mandb found: YES (/usr/bin/mandb) 00:01:26.302 Program sphinx-build found: NO 00:01:26.302 Configuring rte_build_config.h using configuration 00:01:26.302 Message: 00:01:26.302 ================= 00:01:26.302 Applications Enabled 00:01:26.302 ================= 00:01:26.302 00:01:26.302 apps: 00:01:26.302 00:01:26.302 00:01:26.302 Message: 00:01:26.302 ================= 00:01:26.302 Libraries Enabled 00:01:26.302 ================= 00:01:26.302 00:01:26.302 libs: 00:01:26.302 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:26.302 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:26.302 cryptodev, dmadev, power, reorder, security, vhost, 00:01:26.302 00:01:26.302 Message: 00:01:26.302 =============== 00:01:26.302 Drivers Enabled 00:01:26.302 =============== 00:01:26.302 00:01:26.302 common: 00:01:26.302 00:01:26.302 bus: 00:01:26.302 pci, vdev, 00:01:26.302 mempool: 00:01:26.302 ring, 00:01:26.302 dma: 00:01:26.302 00:01:26.302 net: 00:01:26.302 00:01:26.302 crypto: 00:01:26.302 00:01:26.302 compress: 00:01:26.302 00:01:26.302 vdpa: 00:01:26.302 00:01:26.302 00:01:26.302 Message: 00:01:26.302 ================= 00:01:26.302 Content Skipped 00:01:26.302 ================= 00:01:26.302 00:01:26.302 apps: 00:01:26.302 dumpcap: explicitly disabled via build config 00:01:26.302 graph: explicitly disabled via build config 00:01:26.302 pdump: explicitly disabled via build config 00:01:26.302 proc-info: explicitly disabled via build config 00:01:26.302 test-acl: explicitly disabled via build config 00:01:26.302 test-bbdev: explicitly disabled via build config 00:01:26.302 test-cmdline: explicitly disabled via build config 00:01:26.302 test-compress-perf: explicitly disabled via build config 00:01:26.302 test-crypto-perf: explicitly disabled via build config 00:01:26.302 test-dma-perf: explicitly disabled via build config 00:01:26.302 test-eventdev: explicitly disabled via build config 00:01:26.302 test-fib: explicitly disabled via build config 00:01:26.302 test-flow-perf: explicitly disabled via build config 00:01:26.302 test-gpudev: explicitly disabled via build config 00:01:26.302 test-mldev: explicitly disabled via build 
config 00:01:26.302 test-pipeline: explicitly disabled via build config 00:01:26.302 test-pmd: explicitly disabled via build config 00:01:26.302 test-regex: explicitly disabled via build config 00:01:26.302 test-sad: explicitly disabled via build config 00:01:26.302 test-security-perf: explicitly disabled via build config 00:01:26.302 00:01:26.302 libs: 00:01:26.302 metrics: explicitly disabled via build config 00:01:26.302 acl: explicitly disabled via build config 00:01:26.302 bbdev: explicitly disabled via build config 00:01:26.302 bitratestats: explicitly disabled via build config 00:01:26.302 bpf: explicitly disabled via build config 00:01:26.302 cfgfile: explicitly disabled via build config 00:01:26.302 distributor: explicitly disabled via build config 00:01:26.302 efd: explicitly disabled via build config 00:01:26.302 eventdev: explicitly disabled via build config 00:01:26.302 dispatcher: explicitly disabled via build config 00:01:26.302 gpudev: explicitly disabled via build config 00:01:26.302 gro: explicitly disabled via build config 00:01:26.302 gso: explicitly disabled via build config 00:01:26.302 ip_frag: explicitly disabled via build config 00:01:26.302 jobstats: explicitly disabled via build config 00:01:26.302 latencystats: explicitly disabled via build config 00:01:26.302 lpm: explicitly disabled via build config 00:01:26.302 member: explicitly disabled via build config 00:01:26.302 pcapng: explicitly disabled via build config 00:01:26.302 rawdev: explicitly disabled via build config 00:01:26.302 regexdev: explicitly disabled via build config 00:01:26.302 mldev: explicitly disabled via build config 00:01:26.302 rib: explicitly disabled via build config 00:01:26.302 sched: explicitly disabled via build config 00:01:26.302 stack: explicitly disabled via build config 00:01:26.302 ipsec: explicitly disabled via build config 00:01:26.302 pdcp: explicitly disabled via build config 00:01:26.302 fib: explicitly disabled via build config 00:01:26.302 port: explicitly disabled via build config 00:01:26.302 pdump: explicitly disabled via build config 00:01:26.302 table: explicitly disabled via build config 00:01:26.302 pipeline: explicitly disabled via build config 00:01:26.302 graph: explicitly disabled via build config 00:01:26.302 node: explicitly disabled via build config 00:01:26.302 00:01:26.302 drivers: 00:01:26.302 common/cpt: not in enabled drivers build config 00:01:26.302 common/dpaax: not in enabled drivers build config 00:01:26.302 common/iavf: not in enabled drivers build config 00:01:26.302 common/idpf: not in enabled drivers build config 00:01:26.302 common/mvep: not in enabled drivers build config 00:01:26.302 common/octeontx: not in enabled drivers build config 00:01:26.302 bus/auxiliary: not in enabled drivers build config 00:01:26.302 bus/cdx: not in enabled drivers build config 00:01:26.303 bus/dpaa: not in enabled drivers build config 00:01:26.303 bus/fslmc: not in enabled drivers build config 00:01:26.303 bus/ifpga: not in enabled drivers build config 00:01:26.303 bus/platform: not in enabled drivers build config 00:01:26.303 bus/vmbus: not in enabled drivers build config 00:01:26.303 common/cnxk: not in enabled drivers build config 00:01:26.303 common/mlx5: not in enabled drivers build config 00:01:26.303 common/nfp: not in enabled drivers build config 00:01:26.303 common/qat: not in enabled drivers build config 00:01:26.303 common/sfc_efx: not in enabled drivers build config 00:01:26.303 mempool/bucket: not in enabled drivers build config 00:01:26.303 
mempool/cnxk: not in enabled drivers build config 00:01:26.303 mempool/dpaa: not in enabled drivers build config 00:01:26.303 mempool/dpaa2: not in enabled drivers build config 00:01:26.303 mempool/octeontx: not in enabled drivers build config 00:01:26.303 mempool/stack: not in enabled drivers build config 00:01:26.303 dma/cnxk: not in enabled drivers build config 00:01:26.303 dma/dpaa: not in enabled drivers build config 00:01:26.303 dma/dpaa2: not in enabled drivers build config 00:01:26.303 dma/hisilicon: not in enabled drivers build config 00:01:26.303 dma/idxd: not in enabled drivers build config 00:01:26.303 dma/ioat: not in enabled drivers build config 00:01:26.303 dma/skeleton: not in enabled drivers build config 00:01:26.303 net/af_packet: not in enabled drivers build config 00:01:26.303 net/af_xdp: not in enabled drivers build config 00:01:26.303 net/ark: not in enabled drivers build config 00:01:26.303 net/atlantic: not in enabled drivers build config 00:01:26.303 net/avp: not in enabled drivers build config 00:01:26.303 net/axgbe: not in enabled drivers build config 00:01:26.303 net/bnx2x: not in enabled drivers build config 00:01:26.303 net/bnxt: not in enabled drivers build config 00:01:26.303 net/bonding: not in enabled drivers build config 00:01:26.303 net/cnxk: not in enabled drivers build config 00:01:26.303 net/cpfl: not in enabled drivers build config 00:01:26.303 net/cxgbe: not in enabled drivers build config 00:01:26.303 net/dpaa: not in enabled drivers build config 00:01:26.303 net/dpaa2: not in enabled drivers build config 00:01:26.303 net/e1000: not in enabled drivers build config 00:01:26.303 net/ena: not in enabled drivers build config 00:01:26.303 net/enetc: not in enabled drivers build config 00:01:26.303 net/enetfec: not in enabled drivers build config 00:01:26.303 net/enic: not in enabled drivers build config 00:01:26.303 net/failsafe: not in enabled drivers build config 00:01:26.303 net/fm10k: not in enabled drivers build config 00:01:26.303 net/gve: not in enabled drivers build config 00:01:26.303 net/hinic: not in enabled drivers build config 00:01:26.303 net/hns3: not in enabled drivers build config 00:01:26.303 net/i40e: not in enabled drivers build config 00:01:26.303 net/iavf: not in enabled drivers build config 00:01:26.303 net/ice: not in enabled drivers build config 00:01:26.303 net/idpf: not in enabled drivers build config 00:01:26.303 net/igc: not in enabled drivers build config 00:01:26.303 net/ionic: not in enabled drivers build config 00:01:26.303 net/ipn3ke: not in enabled drivers build config 00:01:26.303 net/ixgbe: not in enabled drivers build config 00:01:26.303 net/mana: not in enabled drivers build config 00:01:26.303 net/memif: not in enabled drivers build config 00:01:26.303 net/mlx4: not in enabled drivers build config 00:01:26.303 net/mlx5: not in enabled drivers build config 00:01:26.303 net/mvneta: not in enabled drivers build config 00:01:26.303 net/mvpp2: not in enabled drivers build config 00:01:26.303 net/netvsc: not in enabled drivers build config 00:01:26.303 net/nfb: not in enabled drivers build config 00:01:26.303 net/nfp: not in enabled drivers build config 00:01:26.303 net/ngbe: not in enabled drivers build config 00:01:26.303 net/null: not in enabled drivers build config 00:01:26.303 net/octeontx: not in enabled drivers build config 00:01:26.303 net/octeon_ep: not in enabled drivers build config 00:01:26.303 net/pcap: not in enabled drivers build config 00:01:26.303 net/pfe: not in enabled drivers build config 
00:01:26.303 net/qede: not in enabled drivers build config 00:01:26.303 net/ring: not in enabled drivers build config 00:01:26.303 net/sfc: not in enabled drivers build config 00:01:26.303 net/softnic: not in enabled drivers build config 00:01:26.303 net/tap: not in enabled drivers build config 00:01:26.303 net/thunderx: not in enabled drivers build config 00:01:26.303 net/txgbe: not in enabled drivers build config 00:01:26.303 net/vdev_netvsc: not in enabled drivers build config 00:01:26.303 net/vhost: not in enabled drivers build config 00:01:26.303 net/virtio: not in enabled drivers build config 00:01:26.303 net/vmxnet3: not in enabled drivers build config 00:01:26.303 raw/*: missing internal dependency, "rawdev" 00:01:26.303 crypto/armv8: not in enabled drivers build config 00:01:26.303 crypto/bcmfs: not in enabled drivers build config 00:01:26.303 crypto/caam_jr: not in enabled drivers build config 00:01:26.303 crypto/ccp: not in enabled drivers build config 00:01:26.303 crypto/cnxk: not in enabled drivers build config 00:01:26.303 crypto/dpaa_sec: not in enabled drivers build config 00:01:26.303 crypto/dpaa2_sec: not in enabled drivers build config 00:01:26.303 crypto/ipsec_mb: not in enabled drivers build config 00:01:26.303 crypto/mlx5: not in enabled drivers build config 00:01:26.303 crypto/mvsam: not in enabled drivers build config 00:01:26.303 crypto/nitrox: not in enabled drivers build config 00:01:26.303 crypto/null: not in enabled drivers build config 00:01:26.303 crypto/octeontx: not in enabled drivers build config 00:01:26.303 crypto/openssl: not in enabled drivers build config 00:01:26.303 crypto/scheduler: not in enabled drivers build config 00:01:26.303 crypto/uadk: not in enabled drivers build config 00:01:26.303 crypto/virtio: not in enabled drivers build config 00:01:26.303 compress/isal: not in enabled drivers build config 00:01:26.303 compress/mlx5: not in enabled drivers build config 00:01:26.303 compress/octeontx: not in enabled drivers build config 00:01:26.303 compress/zlib: not in enabled drivers build config 00:01:26.303 regex/*: missing internal dependency, "regexdev" 00:01:26.303 ml/*: missing internal dependency, "mldev" 00:01:26.303 vdpa/ifc: not in enabled drivers build config 00:01:26.303 vdpa/mlx5: not in enabled drivers build config 00:01:26.303 vdpa/nfp: not in enabled drivers build config 00:01:26.303 vdpa/sfc: not in enabled drivers build config 00:01:26.303 event/*: missing internal dependency, "eventdev" 00:01:26.303 baseband/*: missing internal dependency, "bbdev" 00:01:26.303 gpu/*: missing internal dependency, "gpudev" 00:01:26.303 00:01:26.303 00:01:26.562 Build targets in project: 85 00:01:26.562 00:01:26.562 DPDK 23.11.0 00:01:26.562 00:01:26.562 User defined options 00:01:26.562 buildtype : debug 00:01:26.562 default_library : shared 00:01:26.562 libdir : lib 00:01:26.562 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:26.562 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:26.562 c_link_args : 00:01:26.562 cpu_instruction_set: native 00:01:26.562 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:01:26.562 disable_libs : 
sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:01:26.562 enable_docs : false 00:01:26.562 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:26.562 enable_kmods : false 00:01:26.562 tests : false 00:01:26.562 00:01:26.562 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:26.834 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:26.834 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:26.834 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:26.834 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:26.834 [4/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:27.103 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:27.103 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:27.103 [7/265] Linking static target lib/librte_kvargs.a 00:01:27.103 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:27.103 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:27.103 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:27.103 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:27.103 [12/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:27.103 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:27.103 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:27.103 [15/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:27.103 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:27.103 [17/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:27.103 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:27.103 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:27.103 [20/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:27.103 [21/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:27.103 [22/265] Linking static target lib/librte_log.a 00:01:27.103 [23/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:27.103 [24/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:27.103 [25/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:27.103 [26/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:27.103 [27/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:27.103 [28/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:27.103 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:27.103 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:27.103 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:27.103 [32/265] Linking static target lib/librte_pci.a 00:01:27.103 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:27.103 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:27.103 [35/265] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:27.103 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:27.367 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:27.367 [38/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:27.367 [39/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:27.367 [40/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:27.367 [41/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:27.367 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:27.367 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:27.628 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:27.628 [45/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:27.628 [46/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:27.628 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:27.628 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:27.628 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:27.628 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:27.628 [51/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:27.628 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:27.628 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:27.628 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:27.628 [55/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:27.628 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:27.628 [57/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:27.628 [58/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.628 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:27.628 [60/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:27.628 [61/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:27.628 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:27.628 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:27.628 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:27.628 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:27.628 [66/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:27.628 [67/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:27.628 [68/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:27.628 [69/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:27.628 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:27.628 [71/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:27.628 [72/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:27.628 [73/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:27.628 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 
00:01:27.628 [75/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:27.628 [76/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:27.628 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:27.628 [78/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:27.628 [79/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:27.628 [80/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:27.628 [81/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:27.628 [82/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:27.628 [83/265] Linking static target lib/librte_meter.a 00:01:27.628 [84/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.628 [85/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:27.628 [86/265] Linking static target lib/librte_telemetry.a 00:01:27.628 [87/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:27.628 [88/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:27.628 [89/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:27.628 [90/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:27.628 [91/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:27.628 [92/265] Linking static target lib/librte_ring.a 00:01:27.628 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:27.628 [94/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:27.628 [95/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:27.628 [96/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:27.628 [97/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:27.628 [98/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:27.628 [99/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:27.628 [100/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:27.628 [101/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:27.628 [102/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:27.628 [103/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:27.628 [104/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:27.628 [105/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:27.628 [106/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:27.628 [107/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:27.628 [108/265] Linking static target lib/librte_cmdline.a 00:01:27.628 [109/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:27.628 [110/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:27.628 [111/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:27.628 [112/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:27.628 [113/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:27.629 [114/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:27.629 [115/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:27.629 [116/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:27.629 [117/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:27.629 [118/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:27.629 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:27.629 [120/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:27.629 [121/265] Linking static target lib/librte_mempool.a 00:01:27.629 [122/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:27.629 [123/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:27.629 [124/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:27.629 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:27.629 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:27.629 [127/265] Linking static target lib/librte_timer.a 00:01:27.629 [128/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:27.629 [129/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:27.629 [130/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:27.629 [131/265] Linking static target lib/librte_dmadev.a 00:01:27.629 [132/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:27.629 [133/265] Linking static target lib/librte_rcu.a 00:01:27.629 [134/265] Linking static target lib/librte_net.a 00:01:27.629 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:27.629 [136/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:27.629 [137/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:27.629 [138/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:27.629 [139/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:27.629 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:27.629 [141/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:27.629 [142/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:27.629 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:27.629 [144/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:27.629 [145/265] Linking static target lib/librte_eal.a 00:01:27.887 [146/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:27.887 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:27.887 [148/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:27.887 [149/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:27.887 [150/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:27.887 [151/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:27.887 [152/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:27.887 [153/265] Linking static target lib/librte_power.a 00:01:27.887 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:27.887 [155/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:27.887 [156/265] Linking static target lib/librte_compressdev.a 00:01:27.887 [157/265] Generating 
lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.887 [158/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:27.888 [159/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:27.888 [160/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.888 [161/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:27.888 [162/265] Linking target lib/librte_log.so.24.0 00:01:27.888 [163/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:27.888 [164/265] Linking static target lib/librte_mbuf.a 00:01:27.888 [165/265] Linking static target lib/librte_reorder.a 00:01:27.888 [166/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:27.888 [167/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:27.888 [168/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:27.888 [169/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:27.888 [170/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:27.888 [171/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:27.888 [172/265] Linking static target lib/librte_hash.a 00:01:27.888 [173/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:27.888 [174/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.888 [175/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:27.888 [176/265] Linking static target lib/librte_security.a 00:01:27.888 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:27.888 [178/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:27.888 [179/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:28.146 [180/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:28.146 [181/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.146 [182/265] Linking target lib/librte_kvargs.so.24.0 00:01:28.146 [183/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:28.146 [184/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:28.146 [185/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:28.146 [186/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.146 [187/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:28.146 [188/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:28.146 [189/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:28.146 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:28.146 [191/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:28.146 [192/265] Linking static target lib/librte_cryptodev.a 00:01:28.146 [193/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:28.146 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:28.146 [195/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.146 [196/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:28.146 [197/265] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:28.146 [198/265] Linking static target drivers/librte_bus_vdev.a 00:01:28.146 [199/265] Linking target lib/librte_telemetry.so.24.0 00:01:28.146 [200/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.146 [201/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:28.146 [202/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.146 [203/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:28.405 [204/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:28.405 [205/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:28.405 [206/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:28.405 [207/265] Linking static target drivers/librte_bus_pci.a 00:01:28.405 [208/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.405 [209/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:28.405 [210/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:28.405 [211/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:28.405 [212/265] Linking static target drivers/librte_mempool_ring.a 00:01:28.405 [213/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.664 [214/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:28.664 [215/265] Linking static target lib/librte_ethdev.a 00:01:28.664 [216/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.664 [217/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.664 [218/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.664 [219/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:28.664 [220/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.664 [221/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.664 [222/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.922 [223/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.180 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.808 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:29.808 [226/265] Linking static target lib/librte_vhost.a 00:01:30.376 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.756 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.331 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.867 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.867 [231/265] Linking target lib/librte_eal.so.24.0 00:01:40.867 [232/265] Generating symbol file 
lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:40.867 [233/265] Linking target lib/librte_timer.so.24.0 00:01:40.867 [234/265] Linking target lib/librte_meter.so.24.0 00:01:40.867 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:40.867 [236/265] Linking target lib/librte_ring.so.24.0 00:01:40.867 [237/265] Linking target lib/librte_pci.so.24.0 00:01:40.867 [238/265] Linking target lib/librte_dmadev.so.24.0 00:01:41.127 [239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:41.127 [240/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:41.127 [241/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:41.127 [242/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:41.127 [243/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:41.127 [244/265] Linking target lib/librte_mempool.so.24.0 00:01:41.127 [245/265] Linking target lib/librte_rcu.so.24.0 00:01:41.127 [246/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:41.127 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:41.127 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:41.127 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:41.127 [250/265] Linking target lib/librte_mbuf.so.24.0 00:01:41.386 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:41.386 [252/265] Linking target lib/librte_net.so.24.0 00:01:41.386 [253/265] Linking target lib/librte_cryptodev.so.24.0 00:01:41.386 [254/265] Linking target lib/librte_reorder.so.24.0 00:01:41.386 [255/265] Linking target lib/librte_compressdev.so.24.0 00:01:41.645 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:41.645 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:41.645 [258/265] Linking target lib/librte_hash.so.24.0 00:01:41.645 [259/265] Linking target lib/librte_cmdline.so.24.0 00:01:41.645 [260/265] Linking target lib/librte_security.so.24.0 00:01:41.645 [261/265] Linking target lib/librte_ethdev.so.24.0 00:01:41.645 [262/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:41.645 [263/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:41.904 [264/265] Linking target lib/librte_power.so.24.0 00:01:41.904 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:41.904 INFO: autodetecting backend as ninja 00:01:41.904 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:01:42.841 CC lib/ut_mock/mock.o 00:01:42.841 CC lib/ut/ut.o 00:01:42.841 CC lib/log/log_deprecated.o 00:01:42.841 CC lib/log/log.o 00:01:42.841 CC lib/log/log_flags.o 00:01:43.100 LIB libspdk_ut_mock.a 00:01:43.100 SO libspdk_ut_mock.so.6.0 00:01:43.100 LIB libspdk_log.a 00:01:43.100 LIB libspdk_ut.a 00:01:43.100 SO libspdk_log.so.7.0 00:01:43.100 SO libspdk_ut.so.2.0 00:01:43.100 SYMLINK libspdk_ut_mock.so 00:01:43.100 SYMLINK libspdk_log.so 00:01:43.100 SYMLINK libspdk_ut.so 00:01:43.667 CXX lib/trace_parser/trace.o 00:01:43.667 CC lib/dma/dma.o 00:01:43.667 CC lib/ioat/ioat.o 00:01:43.667 CC lib/util/base64.o 00:01:43.667 CC lib/util/bit_array.o 00:01:43.667 CC lib/util/crc32.o 00:01:43.667 CC 
lib/util/cpuset.o 00:01:43.667 CC lib/util/crc16.o 00:01:43.667 CC lib/util/crc32c.o 00:01:43.667 CC lib/util/crc32_ieee.o 00:01:43.667 CC lib/util/crc64.o 00:01:43.667 CC lib/util/dif.o 00:01:43.667 CC lib/util/fd.o 00:01:43.667 CC lib/util/file.o 00:01:43.667 CC lib/util/hexlify.o 00:01:43.667 CC lib/util/iov.o 00:01:43.667 CC lib/util/strerror_tls.o 00:01:43.667 CC lib/util/math.o 00:01:43.667 CC lib/util/pipe.o 00:01:43.667 CC lib/util/string.o 00:01:43.667 CC lib/util/uuid.o 00:01:43.667 CC lib/util/fd_group.o 00:01:43.667 CC lib/util/xor.o 00:01:43.667 CC lib/util/zipf.o 00:01:43.667 LIB libspdk_dma.a 00:01:43.667 CC lib/vfio_user/host/vfio_user_pci.o 00:01:43.667 CC lib/vfio_user/host/vfio_user.o 00:01:43.667 SO libspdk_dma.so.4.0 00:01:43.667 SYMLINK libspdk_dma.so 00:01:43.667 LIB libspdk_ioat.a 00:01:43.925 SO libspdk_ioat.so.7.0 00:01:43.925 SYMLINK libspdk_ioat.so 00:01:43.925 LIB libspdk_vfio_user.a 00:01:43.925 LIB libspdk_util.a 00:01:43.925 SO libspdk_vfio_user.so.5.0 00:01:43.925 SYMLINK libspdk_vfio_user.so 00:01:43.925 SO libspdk_util.so.9.0 00:01:44.184 LIB libspdk_trace_parser.a 00:01:44.184 SYMLINK libspdk_util.so 00:01:44.184 SO libspdk_trace_parser.so.5.0 00:01:44.184 SYMLINK libspdk_trace_parser.so 00:01:44.442 CC lib/json/json_parse.o 00:01:44.442 CC lib/json/json_util.o 00:01:44.442 CC lib/json/json_write.o 00:01:44.442 CC lib/conf/conf.o 00:01:44.442 CC lib/rdma/common.o 00:01:44.442 CC lib/rdma/rdma_verbs.o 00:01:44.442 CC lib/idxd/idxd.o 00:01:44.442 CC lib/idxd/idxd_user.o 00:01:44.442 CC lib/env_dpdk/env.o 00:01:44.442 CC lib/env_dpdk/memory.o 00:01:44.442 CC lib/vmd/vmd.o 00:01:44.442 CC lib/env_dpdk/pci.o 00:01:44.442 CC lib/env_dpdk/init.o 00:01:44.442 CC lib/vmd/led.o 00:01:44.442 CC lib/env_dpdk/threads.o 00:01:44.442 CC lib/env_dpdk/pci_ioat.o 00:01:44.442 CC lib/env_dpdk/pci_virtio.o 00:01:44.442 CC lib/env_dpdk/pci_vmd.o 00:01:44.442 CC lib/env_dpdk/pci_idxd.o 00:01:44.442 CC lib/env_dpdk/pci_event.o 00:01:44.442 CC lib/env_dpdk/sigbus_handler.o 00:01:44.442 CC lib/env_dpdk/pci_dpdk.o 00:01:44.442 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:44.442 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:44.699 LIB libspdk_conf.a 00:01:44.699 SO libspdk_conf.so.6.0 00:01:44.699 LIB libspdk_json.a 00:01:44.699 SYMLINK libspdk_conf.so 00:01:44.699 LIB libspdk_rdma.a 00:01:44.957 SO libspdk_json.so.6.0 00:01:44.957 SO libspdk_rdma.so.6.0 00:01:44.957 SYMLINK libspdk_json.so 00:01:44.957 SYMLINK libspdk_rdma.so 00:01:44.957 LIB libspdk_idxd.a 00:01:44.957 SO libspdk_idxd.so.12.0 00:01:44.957 LIB libspdk_vmd.a 00:01:44.957 SO libspdk_vmd.so.6.0 00:01:44.957 SYMLINK libspdk_idxd.so 00:01:45.215 SYMLINK libspdk_vmd.so 00:01:45.216 CC lib/jsonrpc/jsonrpc_server.o 00:01:45.216 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:45.216 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:45.216 CC lib/jsonrpc/jsonrpc_client.o 00:01:45.475 LIB libspdk_jsonrpc.a 00:01:45.475 LIB libspdk_env_dpdk.a 00:01:45.475 SO libspdk_jsonrpc.so.6.0 00:01:45.475 SO libspdk_env_dpdk.so.14.0 00:01:45.475 SYMLINK libspdk_jsonrpc.so 00:01:45.733 SYMLINK libspdk_env_dpdk.so 00:01:45.992 CC lib/rpc/rpc.o 00:01:46.252 LIB libspdk_rpc.a 00:01:46.252 SO libspdk_rpc.so.6.0 00:01:46.252 SYMLINK libspdk_rpc.so 00:01:46.512 CC lib/keyring/keyring.o 00:01:46.512 CC lib/keyring/keyring_rpc.o 00:01:46.512 CC lib/trace/trace.o 00:01:46.512 CC lib/trace/trace_flags.o 00:01:46.512 CC lib/trace/trace_rpc.o 00:01:46.512 CC lib/notify/notify.o 00:01:46.512 CC lib/notify/notify_rpc.o 00:01:46.770 LIB libspdk_notify.a 00:01:46.770 LIB 
libspdk_keyring.a 00:01:46.770 SO libspdk_notify.so.6.0 00:01:46.770 LIB libspdk_trace.a 00:01:46.770 SO libspdk_keyring.so.1.0 00:01:46.770 SO libspdk_trace.so.10.0 00:01:46.770 SYMLINK libspdk_notify.so 00:01:47.027 SYMLINK libspdk_keyring.so 00:01:47.027 SYMLINK libspdk_trace.so 00:01:47.285 CC lib/thread/thread.o 00:01:47.285 CC lib/thread/iobuf.o 00:01:47.285 CC lib/sock/sock.o 00:01:47.285 CC lib/sock/sock_rpc.o 00:01:47.544 LIB libspdk_sock.a 00:01:47.544 SO libspdk_sock.so.9.0 00:01:47.802 SYMLINK libspdk_sock.so 00:01:48.129 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:48.130 CC lib/nvme/nvme_ctrlr.o 00:01:48.130 CC lib/nvme/nvme_fabric.o 00:01:48.130 CC lib/nvme/nvme_ns_cmd.o 00:01:48.130 CC lib/nvme/nvme_ns.o 00:01:48.130 CC lib/nvme/nvme_pcie_common.o 00:01:48.130 CC lib/nvme/nvme_pcie.o 00:01:48.130 CC lib/nvme/nvme_qpair.o 00:01:48.130 CC lib/nvme/nvme.o 00:01:48.130 CC lib/nvme/nvme_quirks.o 00:01:48.130 CC lib/nvme/nvme_transport.o 00:01:48.130 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:48.130 CC lib/nvme/nvme_discovery.o 00:01:48.130 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:48.130 CC lib/nvme/nvme_tcp.o 00:01:48.130 CC lib/nvme/nvme_opal.o 00:01:48.130 CC lib/nvme/nvme_io_msg.o 00:01:48.130 CC lib/nvme/nvme_poll_group.o 00:01:48.130 CC lib/nvme/nvme_zns.o 00:01:48.130 CC lib/nvme/nvme_stubs.o 00:01:48.130 CC lib/nvme/nvme_auth.o 00:01:48.130 CC lib/nvme/nvme_cuse.o 00:01:48.130 CC lib/nvme/nvme_vfio_user.o 00:01:48.130 CC lib/nvme/nvme_rdma.o 00:01:48.389 LIB libspdk_thread.a 00:01:48.389 SO libspdk_thread.so.10.0 00:01:48.389 SYMLINK libspdk_thread.so 00:01:48.648 CC lib/vfu_tgt/tgt_rpc.o 00:01:48.648 CC lib/vfu_tgt/tgt_endpoint.o 00:01:48.648 CC lib/virtio/virtio_vhost_user.o 00:01:48.648 CC lib/virtio/virtio.o 00:01:48.648 CC lib/virtio/virtio_pci.o 00:01:48.648 CC lib/virtio/virtio_vfio_user.o 00:01:48.648 CC lib/blob/blobstore.o 00:01:48.648 CC lib/blob/request.o 00:01:48.648 CC lib/accel/accel.o 00:01:48.648 CC lib/blob/zeroes.o 00:01:48.648 CC lib/accel/accel_rpc.o 00:01:48.648 CC lib/blob/blob_bs_dev.o 00:01:48.648 CC lib/accel/accel_sw.o 00:01:48.648 CC lib/init/json_config.o 00:01:48.648 CC lib/init/subsystem.o 00:01:48.648 CC lib/init/subsystem_rpc.o 00:01:48.648 CC lib/init/rpc.o 00:01:48.908 LIB libspdk_init.a 00:01:48.908 LIB libspdk_vfu_tgt.a 00:01:48.908 LIB libspdk_virtio.a 00:01:48.908 SO libspdk_init.so.5.0 00:01:49.167 SO libspdk_vfu_tgt.so.3.0 00:01:49.167 SO libspdk_virtio.so.7.0 00:01:49.167 SYMLINK libspdk_init.so 00:01:49.167 SYMLINK libspdk_vfu_tgt.so 00:01:49.167 SYMLINK libspdk_virtio.so 00:01:49.426 CC lib/event/app.o 00:01:49.426 CC lib/event/reactor.o 00:01:49.426 CC lib/event/app_rpc.o 00:01:49.426 CC lib/event/log_rpc.o 00:01:49.426 CC lib/event/scheduler_static.o 00:01:49.426 LIB libspdk_accel.a 00:01:49.426 SO libspdk_accel.so.15.0 00:01:49.686 SYMLINK libspdk_accel.so 00:01:49.686 LIB libspdk_nvme.a 00:01:49.686 SO libspdk_nvme.so.13.0 00:01:49.686 LIB libspdk_event.a 00:01:49.945 SO libspdk_event.so.13.0 00:01:49.945 SYMLINK libspdk_event.so 00:01:49.945 CC lib/bdev/bdev.o 00:01:49.945 CC lib/bdev/bdev_zone.o 00:01:49.945 CC lib/bdev/bdev_rpc.o 00:01:49.945 CC lib/bdev/scsi_nvme.o 00:01:49.945 CC lib/bdev/part.o 00:01:49.945 SYMLINK libspdk_nvme.so 00:01:50.884 LIB libspdk_blob.a 00:01:50.884 SO libspdk_blob.so.11.0 00:01:50.884 SYMLINK libspdk_blob.so 00:01:51.143 CC lib/lvol/lvol.o 00:01:51.403 CC lib/blobfs/blobfs.o 00:01:51.403 CC lib/blobfs/tree.o 00:01:51.663 LIB libspdk_bdev.a 00:01:51.663 SO libspdk_bdev.so.15.0 00:01:51.922 LIB 
libspdk_blobfs.a 00:01:51.922 SYMLINK libspdk_bdev.so 00:01:51.922 LIB libspdk_lvol.a 00:01:51.922 SO libspdk_lvol.so.10.0 00:01:51.922 SO libspdk_blobfs.so.10.0 00:01:51.922 SYMLINK libspdk_lvol.so 00:01:51.922 SYMLINK libspdk_blobfs.so 00:01:52.182 CC lib/ublk/ublk.o 00:01:52.182 CC lib/ublk/ublk_rpc.o 00:01:52.182 CC lib/ftl/ftl_core.o 00:01:52.182 CC lib/scsi/dev.o 00:01:52.182 CC lib/scsi/lun.o 00:01:52.182 CC lib/ftl/ftl_init.o 00:01:52.182 CC lib/ftl/ftl_layout.o 00:01:52.182 CC lib/scsi/port.o 00:01:52.182 CC lib/nvmf/ctrlr.o 00:01:52.182 CC lib/nbd/nbd.o 00:01:52.182 CC lib/ftl/ftl_debug.o 00:01:52.182 CC lib/scsi/scsi.o 00:01:52.182 CC lib/ftl/ftl_io.o 00:01:52.182 CC lib/nbd/nbd_rpc.o 00:01:52.182 CC lib/scsi/scsi_bdev.o 00:01:52.182 CC lib/nvmf/ctrlr_discovery.o 00:01:52.182 CC lib/ftl/ftl_sb.o 00:01:52.182 CC lib/scsi/scsi_pr.o 00:01:52.182 CC lib/ftl/ftl_l2p.o 00:01:52.182 CC lib/nvmf/ctrlr_bdev.o 00:01:52.182 CC lib/ftl/ftl_l2p_flat.o 00:01:52.182 CC lib/scsi/scsi_rpc.o 00:01:52.182 CC lib/nvmf/subsystem.o 00:01:52.182 CC lib/scsi/task.o 00:01:52.182 CC lib/ftl/ftl_nv_cache.o 00:01:52.182 CC lib/ftl/ftl_band.o 00:01:52.182 CC lib/nvmf/nvmf.o 00:01:52.182 CC lib/nvmf/nvmf_rpc.o 00:01:52.182 CC lib/ftl/ftl_band_ops.o 00:01:52.182 CC lib/nvmf/transport.o 00:01:52.182 CC lib/ftl/ftl_writer.o 00:01:52.182 CC lib/ftl/ftl_rq.o 00:01:52.182 CC lib/nvmf/tcp.o 00:01:52.182 CC lib/ftl/ftl_reloc.o 00:01:52.182 CC lib/nvmf/stubs.o 00:01:52.182 CC lib/ftl/ftl_l2p_cache.o 00:01:52.182 CC lib/nvmf/vfio_user.o 00:01:52.182 CC lib/ftl/ftl_p2l.o 00:01:52.182 CC lib/nvmf/rdma.o 00:01:52.182 CC lib/nvmf/auth.o 00:01:52.182 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:52.182 CC lib/ftl/mngt/ftl_mngt.o 00:01:52.182 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:52.182 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:52.182 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:52.182 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:52.182 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:52.182 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:52.182 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:52.182 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:52.183 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:52.183 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:52.183 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:52.183 CC lib/ftl/utils/ftl_conf.o 00:01:52.183 CC lib/ftl/utils/ftl_md.o 00:01:52.183 CC lib/ftl/utils/ftl_mempool.o 00:01:52.183 CC lib/ftl/utils/ftl_bitmap.o 00:01:52.183 CC lib/ftl/utils/ftl_property.o 00:01:52.183 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:52.183 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:52.183 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:52.183 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:52.183 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:52.183 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:52.183 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:52.183 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:52.183 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:52.183 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:52.183 CC lib/ftl/base/ftl_base_bdev.o 00:01:52.183 CC lib/ftl/base/ftl_base_dev.o 00:01:52.183 CC lib/ftl/ftl_trace.o 00:01:52.750 LIB libspdk_nbd.a 00:01:52.750 SO libspdk_nbd.so.7.0 00:01:52.750 LIB libspdk_scsi.a 00:01:52.750 SYMLINK libspdk_nbd.so 00:01:52.750 SO libspdk_scsi.so.9.0 00:01:52.750 LIB libspdk_ublk.a 00:01:53.009 SO libspdk_ublk.so.3.0 00:01:53.009 SYMLINK libspdk_scsi.so 00:01:53.009 SYMLINK libspdk_ublk.so 00:01:53.009 LIB libspdk_ftl.a 00:01:53.268 SO libspdk_ftl.so.9.0 00:01:53.268 CC lib/iscsi/iscsi.o 00:01:53.268 CC lib/iscsi/conn.o 00:01:53.268 CC lib/iscsi/init_grp.o 00:01:53.268 CC 
lib/iscsi/md5.o 00:01:53.268 CC lib/iscsi/param.o 00:01:53.268 CC lib/vhost/vhost.o 00:01:53.268 CC lib/vhost/vhost_rpc.o 00:01:53.268 CC lib/iscsi/portal_grp.o 00:01:53.268 CC lib/vhost/vhost_scsi.o 00:01:53.268 CC lib/vhost/vhost_blk.o 00:01:53.268 CC lib/iscsi/tgt_node.o 00:01:53.268 CC lib/vhost/rte_vhost_user.o 00:01:53.268 CC lib/iscsi/iscsi_subsystem.o 00:01:53.268 CC lib/iscsi/iscsi_rpc.o 00:01:53.268 CC lib/iscsi/task.o 00:01:53.527 SYMLINK libspdk_ftl.so 00:01:53.787 LIB libspdk_nvmf.a 00:01:54.047 SO libspdk_nvmf.so.18.0 00:01:54.047 LIB libspdk_vhost.a 00:01:54.047 SO libspdk_vhost.so.8.0 00:01:54.047 SYMLINK libspdk_nvmf.so 00:01:54.306 SYMLINK libspdk_vhost.so 00:01:54.306 LIB libspdk_iscsi.a 00:01:54.307 SO libspdk_iscsi.so.8.0 00:01:54.566 SYMLINK libspdk_iscsi.so 00:01:55.135 CC module/vfu_device/vfu_virtio.o 00:01:55.135 CC module/env_dpdk/env_dpdk_rpc.o 00:01:55.135 CC module/vfu_device/vfu_virtio_blk.o 00:01:55.135 CC module/vfu_device/vfu_virtio_scsi.o 00:01:55.135 CC module/vfu_device/vfu_virtio_rpc.o 00:01:55.135 LIB libspdk_env_dpdk_rpc.a 00:01:55.135 CC module/keyring/file/keyring.o 00:01:55.135 CC module/keyring/file/keyring_rpc.o 00:01:55.135 CC module/accel/ioat/accel_ioat.o 00:01:55.135 CC module/accel/error/accel_error.o 00:01:55.135 CC module/accel/ioat/accel_ioat_rpc.o 00:01:55.135 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:55.135 CC module/accel/error/accel_error_rpc.o 00:01:55.135 CC module/accel/dsa/accel_dsa.o 00:01:55.135 CC module/accel/dsa/accel_dsa_rpc.o 00:01:55.135 SO libspdk_env_dpdk_rpc.so.6.0 00:01:55.135 CC module/scheduler/gscheduler/gscheduler.o 00:01:55.135 CC module/blob/bdev/blob_bdev.o 00:01:55.135 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:55.135 CC module/sock/posix/posix.o 00:01:55.135 CC module/accel/iaa/accel_iaa.o 00:01:55.135 CC module/accel/iaa/accel_iaa_rpc.o 00:01:55.135 SYMLINK libspdk_env_dpdk_rpc.so 00:01:55.395 LIB libspdk_keyring_file.a 00:01:55.395 LIB libspdk_scheduler_dpdk_governor.a 00:01:55.395 LIB libspdk_accel_ioat.a 00:01:55.395 LIB libspdk_scheduler_gscheduler.a 00:01:55.395 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:55.395 SO libspdk_keyring_file.so.1.0 00:01:55.395 LIB libspdk_accel_error.a 00:01:55.395 SO libspdk_accel_ioat.so.6.0 00:01:55.395 LIB libspdk_scheduler_dynamic.a 00:01:55.395 SO libspdk_scheduler_gscheduler.so.4.0 00:01:55.395 SYMLINK libspdk_keyring_file.so 00:01:55.395 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:55.395 SO libspdk_scheduler_dynamic.so.4.0 00:01:55.395 LIB libspdk_accel_dsa.a 00:01:55.395 SO libspdk_accel_error.so.2.0 00:01:55.395 LIB libspdk_accel_iaa.a 00:01:55.395 SYMLINK libspdk_accel_ioat.so 00:01:55.395 SO libspdk_accel_dsa.so.5.0 00:01:55.395 LIB libspdk_blob_bdev.a 00:01:55.395 SYMLINK libspdk_scheduler_gscheduler.so 00:01:55.395 SO libspdk_accel_iaa.so.3.0 00:01:55.395 SYMLINK libspdk_scheduler_dynamic.so 00:01:55.395 SYMLINK libspdk_accel_error.so 00:01:55.395 SO libspdk_blob_bdev.so.11.0 00:01:55.395 LIB libspdk_vfu_device.a 00:01:55.395 SYMLINK libspdk_accel_dsa.so 00:01:55.395 SYMLINK libspdk_accel_iaa.so 00:01:55.655 SYMLINK libspdk_blob_bdev.so 00:01:55.655 SO libspdk_vfu_device.so.3.0 00:01:55.655 SYMLINK libspdk_vfu_device.so 00:01:55.655 LIB libspdk_sock_posix.a 00:01:55.655 SO libspdk_sock_posix.so.6.0 00:01:55.914 SYMLINK libspdk_sock_posix.so 00:01:55.914 CC module/bdev/lvol/vbdev_lvol.o 00:01:55.914 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:55.914 CC module/bdev/gpt/gpt.o 00:01:55.914 CC module/bdev/raid/bdev_raid_sb.o 
00:01:55.914 CC module/bdev/raid/bdev_raid.o 00:01:55.914 CC module/bdev/gpt/vbdev_gpt.o 00:01:55.914 CC module/bdev/raid/bdev_raid_rpc.o 00:01:55.914 CC module/bdev/null/bdev_null_rpc.o 00:01:55.914 CC module/bdev/aio/bdev_aio.o 00:01:55.914 CC module/bdev/null/bdev_null.o 00:01:55.914 CC module/bdev/raid/raid1.o 00:01:55.914 CC module/bdev/raid/raid0.o 00:01:55.914 CC module/bdev/aio/bdev_aio_rpc.o 00:01:55.914 CC module/bdev/iscsi/bdev_iscsi.o 00:01:55.914 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:55.914 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:55.914 CC module/bdev/raid/concat.o 00:01:55.914 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:55.914 CC module/bdev/malloc/bdev_malloc.o 00:01:55.914 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:55.914 CC module/bdev/delay/vbdev_delay.o 00:01:55.914 CC module/bdev/ftl/bdev_ftl.o 00:01:55.914 CC module/bdev/nvme/bdev_nvme.o 00:01:55.914 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:55.914 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:55.914 CC module/bdev/nvme/nvme_rpc.o 00:01:55.914 CC module/bdev/passthru/vbdev_passthru.o 00:01:55.914 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:55.914 CC module/bdev/nvme/bdev_mdns_client.o 00:01:55.914 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:55.915 CC module/bdev/nvme/vbdev_opal.o 00:01:55.915 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:55.915 CC module/bdev/split/vbdev_split.o 00:01:55.915 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:55.915 CC module/bdev/split/vbdev_split_rpc.o 00:01:55.915 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:55.915 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:55.915 CC module/bdev/error/vbdev_error.o 00:01:55.915 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:56.173 CC module/bdev/error/vbdev_error_rpc.o 00:01:56.173 CC module/blobfs/bdev/blobfs_bdev.o 00:01:56.173 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:56.173 LIB libspdk_blobfs_bdev.a 00:01:56.431 LIB libspdk_bdev_null.a 00:01:56.431 SO libspdk_blobfs_bdev.so.6.0 00:01:56.431 LIB libspdk_bdev_split.a 00:01:56.431 SO libspdk_bdev_null.so.6.0 00:01:56.431 LIB libspdk_bdev_gpt.a 00:01:56.431 LIB libspdk_bdev_error.a 00:01:56.431 LIB libspdk_bdev_ftl.a 00:01:56.431 SO libspdk_bdev_split.so.6.0 00:01:56.431 LIB libspdk_bdev_passthru.a 00:01:56.431 SYMLINK libspdk_blobfs_bdev.so 00:01:56.431 SO libspdk_bdev_gpt.so.6.0 00:01:56.431 SO libspdk_bdev_error.so.6.0 00:01:56.431 LIB libspdk_bdev_aio.a 00:01:56.431 SYMLINK libspdk_bdev_null.so 00:01:56.431 LIB libspdk_bdev_delay.a 00:01:56.431 LIB libspdk_bdev_zone_block.a 00:01:56.431 SO libspdk_bdev_ftl.so.6.0 00:01:56.431 SO libspdk_bdev_passthru.so.6.0 00:01:56.431 LIB libspdk_bdev_iscsi.a 00:01:56.431 LIB libspdk_bdev_malloc.a 00:01:56.431 SO libspdk_bdev_aio.so.6.0 00:01:56.431 SYMLINK libspdk_bdev_split.so 00:01:56.431 SO libspdk_bdev_delay.so.6.0 00:01:56.431 SYMLINK libspdk_bdev_gpt.so 00:01:56.431 SO libspdk_bdev_zone_block.so.6.0 00:01:56.431 SYMLINK libspdk_bdev_error.so 00:01:56.431 SO libspdk_bdev_malloc.so.6.0 00:01:56.431 SO libspdk_bdev_iscsi.so.6.0 00:01:56.431 SYMLINK libspdk_bdev_ftl.so 00:01:56.431 SYMLINK libspdk_bdev_passthru.so 00:01:56.431 SYMLINK libspdk_bdev_aio.so 00:01:56.431 LIB libspdk_bdev_virtio.a 00:01:56.431 LIB libspdk_bdev_lvol.a 00:01:56.431 SYMLINK libspdk_bdev_zone_block.so 00:01:56.431 SYMLINK libspdk_bdev_delay.so 00:01:56.431 SYMLINK libspdk_bdev_malloc.so 00:01:56.431 SYMLINK libspdk_bdev_iscsi.so 00:01:56.690 SO libspdk_bdev_lvol.so.6.0 00:01:56.690 SO libspdk_bdev_virtio.so.6.0 00:01:56.690 SYMLINK 
libspdk_bdev_virtio.so 00:01:56.690 SYMLINK libspdk_bdev_lvol.so 00:01:56.690 LIB libspdk_bdev_raid.a 00:01:56.951 SO libspdk_bdev_raid.so.6.0 00:01:56.951 SYMLINK libspdk_bdev_raid.so 00:01:57.520 LIB libspdk_bdev_nvme.a 00:01:57.779 SO libspdk_bdev_nvme.so.7.0 00:01:57.779 SYMLINK libspdk_bdev_nvme.so 00:01:58.346 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:58.346 CC module/event/subsystems/iobuf/iobuf.o 00:01:58.346 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:58.346 CC module/event/subsystems/scheduler/scheduler.o 00:01:58.346 CC module/event/subsystems/sock/sock.o 00:01:58.346 CC module/event/subsystems/keyring/keyring.o 00:01:58.346 CC module/event/subsystems/vmd/vmd.o 00:01:58.346 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:58.346 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:58.605 LIB libspdk_event_iobuf.a 00:01:58.605 LIB libspdk_event_scheduler.a 00:01:58.605 LIB libspdk_event_sock.a 00:01:58.605 LIB libspdk_event_vhost_blk.a 00:01:58.605 LIB libspdk_event_keyring.a 00:01:58.605 SO libspdk_event_iobuf.so.3.0 00:01:58.605 LIB libspdk_event_vfu_tgt.a 00:01:58.605 LIB libspdk_event_vmd.a 00:01:58.605 SO libspdk_event_scheduler.so.4.0 00:01:58.605 SO libspdk_event_sock.so.5.0 00:01:58.605 SO libspdk_event_vhost_blk.so.3.0 00:01:58.605 SO libspdk_event_keyring.so.1.0 00:01:58.605 SO libspdk_event_vfu_tgt.so.3.0 00:01:58.605 SO libspdk_event_vmd.so.6.0 00:01:58.605 SYMLINK libspdk_event_sock.so 00:01:58.605 SYMLINK libspdk_event_iobuf.so 00:01:58.605 SYMLINK libspdk_event_scheduler.so 00:01:58.605 SYMLINK libspdk_event_vhost_blk.so 00:01:58.605 SYMLINK libspdk_event_keyring.so 00:01:58.605 SYMLINK libspdk_event_vfu_tgt.so 00:01:58.605 SYMLINK libspdk_event_vmd.so 00:01:58.864 CC module/event/subsystems/accel/accel.o 00:01:59.124 LIB libspdk_event_accel.a 00:01:59.124 SO libspdk_event_accel.so.6.0 00:01:59.124 SYMLINK libspdk_event_accel.so 00:01:59.692 CC module/event/subsystems/bdev/bdev.o 00:01:59.692 LIB libspdk_event_bdev.a 00:01:59.692 SO libspdk_event_bdev.so.6.0 00:01:59.692 SYMLINK libspdk_event_bdev.so 00:02:00.264 CC module/event/subsystems/ublk/ublk.o 00:02:00.264 CC module/event/subsystems/scsi/scsi.o 00:02:00.264 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:00.264 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:00.264 CC module/event/subsystems/nbd/nbd.o 00:02:00.264 LIB libspdk_event_ublk.a 00:02:00.264 SO libspdk_event_ublk.so.3.0 00:02:00.264 LIB libspdk_event_scsi.a 00:02:00.264 LIB libspdk_event_nbd.a 00:02:00.264 SO libspdk_event_scsi.so.6.0 00:02:00.264 LIB libspdk_event_nvmf.a 00:02:00.264 SYMLINK libspdk_event_ublk.so 00:02:00.264 SO libspdk_event_nbd.so.6.0 00:02:00.264 SO libspdk_event_nvmf.so.6.0 00:02:00.525 SYMLINK libspdk_event_nbd.so 00:02:00.525 SYMLINK libspdk_event_scsi.so 00:02:00.526 SYMLINK libspdk_event_nvmf.so 00:02:00.827 CC module/event/subsystems/iscsi/iscsi.o 00:02:00.827 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:00.827 LIB libspdk_event_vhost_scsi.a 00:02:00.827 LIB libspdk_event_iscsi.a 00:02:00.827 SO libspdk_event_vhost_scsi.so.3.0 00:02:01.087 SO libspdk_event_iscsi.so.6.0 00:02:01.087 SYMLINK libspdk_event_iscsi.so 00:02:01.087 SYMLINK libspdk_event_vhost_scsi.so 00:02:01.345 SO libspdk.so.6.0 00:02:01.345 SYMLINK libspdk.so 00:02:01.611 CC app/spdk_lspci/spdk_lspci.o 00:02:01.611 TEST_HEADER include/spdk/accel.h 00:02:01.611 CXX app/trace/trace.o 00:02:01.611 TEST_HEADER include/spdk/accel_module.h 00:02:01.611 TEST_HEADER include/spdk/assert.h 00:02:01.611 TEST_HEADER include/spdk/barrier.h 
00:02:01.611 TEST_HEADER include/spdk/bdev.h 00:02:01.611 TEST_HEADER include/spdk/base64.h 00:02:01.611 TEST_HEADER include/spdk/bdev_module.h 00:02:01.611 TEST_HEADER include/spdk/bdev_zone.h 00:02:01.611 TEST_HEADER include/spdk/bit_pool.h 00:02:01.611 TEST_HEADER include/spdk/bit_array.h 00:02:01.611 CC test/rpc_client/rpc_client_test.o 00:02:01.611 TEST_HEADER include/spdk/blob_bdev.h 00:02:01.611 CC app/spdk_nvme_identify/identify.o 00:02:01.611 TEST_HEADER include/spdk/blobfs.h 00:02:01.611 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:01.611 TEST_HEADER include/spdk/blob.h 00:02:01.611 TEST_HEADER include/spdk/conf.h 00:02:01.611 TEST_HEADER include/spdk/config.h 00:02:01.611 TEST_HEADER include/spdk/cpuset.h 00:02:01.611 TEST_HEADER include/spdk/crc16.h 00:02:01.611 TEST_HEADER include/spdk/crc32.h 00:02:01.611 CC app/trace_record/trace_record.o 00:02:01.611 TEST_HEADER include/spdk/crc64.h 00:02:01.611 CC app/spdk_nvme_discover/discovery_aer.o 00:02:01.611 TEST_HEADER include/spdk/dif.h 00:02:01.611 CC app/spdk_top/spdk_top.o 00:02:01.611 TEST_HEADER include/spdk/endian.h 00:02:01.611 TEST_HEADER include/spdk/dma.h 00:02:01.611 TEST_HEADER include/spdk/env_dpdk.h 00:02:01.611 TEST_HEADER include/spdk/env.h 00:02:01.611 TEST_HEADER include/spdk/event.h 00:02:01.611 TEST_HEADER include/spdk/fd_group.h 00:02:01.611 TEST_HEADER include/spdk/fd.h 00:02:01.611 TEST_HEADER include/spdk/file.h 00:02:01.611 TEST_HEADER include/spdk/hexlify.h 00:02:01.611 TEST_HEADER include/spdk/ftl.h 00:02:01.611 TEST_HEADER include/spdk/gpt_spec.h 00:02:01.611 TEST_HEADER include/spdk/histogram_data.h 00:02:01.611 TEST_HEADER include/spdk/idxd.h 00:02:01.611 CC app/spdk_nvme_perf/perf.o 00:02:01.611 TEST_HEADER include/spdk/idxd_spec.h 00:02:01.611 TEST_HEADER include/spdk/init.h 00:02:01.611 TEST_HEADER include/spdk/ioat.h 00:02:01.611 TEST_HEADER include/spdk/iscsi_spec.h 00:02:01.611 TEST_HEADER include/spdk/ioat_spec.h 00:02:01.611 TEST_HEADER include/spdk/json.h 00:02:01.611 TEST_HEADER include/spdk/jsonrpc.h 00:02:01.611 TEST_HEADER include/spdk/keyring_module.h 00:02:01.611 TEST_HEADER include/spdk/keyring.h 00:02:01.611 TEST_HEADER include/spdk/likely.h 00:02:01.611 TEST_HEADER include/spdk/lvol.h 00:02:01.611 TEST_HEADER include/spdk/log.h 00:02:01.611 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:01.611 TEST_HEADER include/spdk/mmio.h 00:02:01.611 TEST_HEADER include/spdk/memory.h 00:02:01.611 TEST_HEADER include/spdk/notify.h 00:02:01.611 TEST_HEADER include/spdk/nbd.h 00:02:01.611 TEST_HEADER include/spdk/nvme.h 00:02:01.611 TEST_HEADER include/spdk/nvme_intel.h 00:02:01.611 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:01.611 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:01.611 TEST_HEADER include/spdk/nvme_spec.h 00:02:01.611 TEST_HEADER include/spdk/nvme_zns.h 00:02:01.611 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:01.611 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:01.611 TEST_HEADER include/spdk/nvmf.h 00:02:01.611 TEST_HEADER include/spdk/nvmf_spec.h 00:02:01.611 TEST_HEADER include/spdk/nvmf_transport.h 00:02:01.611 TEST_HEADER include/spdk/opal.h 00:02:01.611 TEST_HEADER include/spdk/opal_spec.h 00:02:01.611 TEST_HEADER include/spdk/pci_ids.h 00:02:01.611 TEST_HEADER include/spdk/pipe.h 00:02:01.611 TEST_HEADER include/spdk/queue.h 00:02:01.611 TEST_HEADER include/spdk/reduce.h 00:02:01.611 TEST_HEADER include/spdk/rpc.h 00:02:01.611 TEST_HEADER include/spdk/scheduler.h 00:02:01.611 TEST_HEADER include/spdk/scsi.h 00:02:01.611 TEST_HEADER include/spdk/scsi_spec.h 00:02:01.611 CC 
app/spdk_dd/spdk_dd.o 00:02:01.611 TEST_HEADER include/spdk/sock.h 00:02:01.611 TEST_HEADER include/spdk/stdinc.h 00:02:01.611 TEST_HEADER include/spdk/string.h 00:02:01.611 TEST_HEADER include/spdk/thread.h 00:02:01.611 TEST_HEADER include/spdk/trace.h 00:02:01.611 CC app/vhost/vhost.o 00:02:01.611 TEST_HEADER include/spdk/trace_parser.h 00:02:01.611 TEST_HEADER include/spdk/ublk.h 00:02:01.611 TEST_HEADER include/spdk/tree.h 00:02:01.611 TEST_HEADER include/spdk/util.h 00:02:01.611 TEST_HEADER include/spdk/uuid.h 00:02:01.611 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:01.611 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:01.611 TEST_HEADER include/spdk/version.h 00:02:01.611 TEST_HEADER include/spdk/vhost.h 00:02:01.611 TEST_HEADER include/spdk/vmd.h 00:02:01.611 TEST_HEADER include/spdk/xor.h 00:02:01.611 TEST_HEADER include/spdk/zipf.h 00:02:01.611 CXX test/cpp_headers/accel_module.o 00:02:01.611 CXX test/cpp_headers/accel.o 00:02:01.611 CXX test/cpp_headers/assert.o 00:02:01.611 CXX test/cpp_headers/barrier.o 00:02:01.611 CXX test/cpp_headers/base64.o 00:02:01.611 CXX test/cpp_headers/bdev.o 00:02:01.611 CC app/nvmf_tgt/nvmf_main.o 00:02:01.611 CXX test/cpp_headers/bdev_zone.o 00:02:01.611 CXX test/cpp_headers/bdev_module.o 00:02:01.612 CXX test/cpp_headers/bit_array.o 00:02:01.612 CC app/iscsi_tgt/iscsi_tgt.o 00:02:01.612 CXX test/cpp_headers/bit_pool.o 00:02:01.612 CXX test/cpp_headers/blob_bdev.o 00:02:01.612 CXX test/cpp_headers/blobfs_bdev.o 00:02:01.612 CXX test/cpp_headers/blobfs.o 00:02:01.612 CXX test/cpp_headers/blob.o 00:02:01.612 CXX test/cpp_headers/config.o 00:02:01.612 CXX test/cpp_headers/conf.o 00:02:01.612 CXX test/cpp_headers/cpuset.o 00:02:01.612 CXX test/cpp_headers/crc32.o 00:02:01.612 CXX test/cpp_headers/crc16.o 00:02:01.612 CXX test/cpp_headers/crc64.o 00:02:01.612 CXX test/cpp_headers/dif.o 00:02:01.612 CXX test/cpp_headers/dma.o 00:02:01.612 CXX test/cpp_headers/endian.o 00:02:01.612 CXX test/cpp_headers/env_dpdk.o 00:02:01.612 CXX test/cpp_headers/env.o 00:02:01.612 CXX test/cpp_headers/event.o 00:02:01.612 CXX test/cpp_headers/fd_group.o 00:02:01.612 CXX test/cpp_headers/fd.o 00:02:01.612 CXX test/cpp_headers/file.o 00:02:01.612 CXX test/cpp_headers/ftl.o 00:02:01.612 CXX test/cpp_headers/gpt_spec.o 00:02:01.612 CXX test/cpp_headers/hexlify.o 00:02:01.612 CXX test/cpp_headers/histogram_data.o 00:02:01.612 CXX test/cpp_headers/idxd.o 00:02:01.612 CXX test/cpp_headers/init.o 00:02:01.612 CXX test/cpp_headers/ioat.o 00:02:01.883 CXX test/cpp_headers/idxd_spec.o 00:02:01.883 CC app/spdk_tgt/spdk_tgt.o 00:02:01.883 CXX test/cpp_headers/ioat_spec.o 00:02:01.883 CC test/env/vtophys/vtophys.o 00:02:01.883 CC test/env/pci/pci_ut.o 00:02:01.883 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:01.883 CC test/app/histogram_perf/histogram_perf.o 00:02:01.883 CC test/env/memory/memory_ut.o 00:02:01.883 CC test/app/jsoncat/jsoncat.o 00:02:01.883 CC test/app/stub/stub.o 00:02:01.883 CC examples/ioat/perf/perf.o 00:02:01.883 CC examples/accel/perf/accel_perf.o 00:02:01.883 CC examples/ioat/verify/verify.o 00:02:01.883 CC test/nvme/reserve/reserve.o 00:02:01.883 CC examples/util/zipf/zipf.o 00:02:01.883 CC test/nvme/reset/reset.o 00:02:01.883 CC test/nvme/startup/startup.o 00:02:01.883 CC test/event/reactor_perf/reactor_perf.o 00:02:01.883 CC examples/nvme/hotplug/hotplug.o 00:02:01.883 CC test/nvme/fdp/fdp.o 00:02:01.883 CC examples/idxd/perf/perf.o 00:02:01.883 CC examples/sock/hello_world/hello_sock.o 00:02:01.883 CC examples/vmd/led/led.o 00:02:01.883 CC 
test/nvme/simple_copy/simple_copy.o 00:02:01.883 CC test/nvme/aer/aer.o 00:02:01.883 CC test/nvme/sgl/sgl.o 00:02:01.883 CC examples/nvme/hello_world/hello_world.o 00:02:01.883 CC test/nvme/err_injection/err_injection.o 00:02:01.883 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:01.883 CC examples/bdev/hello_world/hello_bdev.o 00:02:01.883 CC test/blobfs/mkfs/mkfs.o 00:02:01.883 CC examples/nvme/reconnect/reconnect.o 00:02:01.883 CC test/thread/poller_perf/poller_perf.o 00:02:01.883 CC test/nvme/e2edp/nvme_dp.o 00:02:01.883 CC examples/vmd/lsvmd/lsvmd.o 00:02:01.883 CC test/nvme/overhead/overhead.o 00:02:01.883 CC test/accel/dif/dif.o 00:02:01.883 CC examples/nvme/arbitration/arbitration.o 00:02:01.883 CC test/nvme/fused_ordering/fused_ordering.o 00:02:01.883 CC test/nvme/boot_partition/boot_partition.o 00:02:01.883 CC app/fio/nvme/fio_plugin.o 00:02:01.883 CC test/event/event_perf/event_perf.o 00:02:01.883 CC examples/nvme/abort/abort.o 00:02:01.883 CC test/event/reactor/reactor.o 00:02:01.883 CC examples/blob/cli/blobcli.o 00:02:01.883 CC test/nvme/compliance/nvme_compliance.o 00:02:01.883 CC test/nvme/cuse/cuse.o 00:02:01.883 CC test/app/bdev_svc/bdev_svc.o 00:02:01.883 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:01.883 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:01.883 CC examples/bdev/bdevperf/bdevperf.o 00:02:01.883 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:01.883 CC test/nvme/connect_stress/connect_stress.o 00:02:01.883 CC test/dma/test_dma/test_dma.o 00:02:01.883 CC examples/blob/hello_world/hello_blob.o 00:02:01.883 CC test/event/scheduler/scheduler.o 00:02:01.883 CC examples/thread/thread/thread_ex.o 00:02:01.883 CC test/bdev/bdevio/bdevio.o 00:02:01.883 CC app/fio/bdev/fio_plugin.o 00:02:01.883 CC test/event/app_repeat/app_repeat.o 00:02:02.153 CC examples/nvmf/nvmf/nvmf.o 00:02:02.153 LINK spdk_lspci 00:02:02.153 LINK rpc_client_test 00:02:02.153 CC test/env/mem_callbacks/mem_callbacks.o 00:02:02.417 LINK interrupt_tgt 00:02:02.417 LINK spdk_nvme_discover 00:02:02.417 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:02.417 LINK vhost 00:02:02.417 LINK nvmf_tgt 00:02:02.417 CC test/lvol/esnap/esnap.o 00:02:02.417 LINK spdk_trace_record 00:02:02.417 LINK jsoncat 00:02:02.417 LINK vtophys 00:02:02.417 LINK zipf 00:02:02.417 LINK env_dpdk_post_init 00:02:02.417 CXX test/cpp_headers/iscsi_spec.o 00:02:02.417 LINK reactor 00:02:02.417 LINK lsvmd 00:02:02.417 CXX test/cpp_headers/json.o 00:02:02.417 LINK reactor_perf 00:02:02.417 CXX test/cpp_headers/jsonrpc.o 00:02:02.417 CXX test/cpp_headers/keyring.o 00:02:02.417 LINK histogram_perf 00:02:02.417 CXX test/cpp_headers/keyring_module.o 00:02:02.417 LINK led 00:02:02.417 LINK stub 00:02:02.417 CXX test/cpp_headers/likely.o 00:02:02.417 CXX test/cpp_headers/log.o 00:02:02.417 LINK spdk_tgt 00:02:02.417 CXX test/cpp_headers/lvol.o 00:02:02.417 CXX test/cpp_headers/memory.o 00:02:02.417 LINK event_perf 00:02:02.417 CXX test/cpp_headers/mmio.o 00:02:02.417 CXX test/cpp_headers/nbd.o 00:02:02.417 CXX test/cpp_headers/notify.o 00:02:02.417 LINK iscsi_tgt 00:02:02.417 CXX test/cpp_headers/nvme_intel.o 00:02:02.417 CXX test/cpp_headers/nvme.o 00:02:02.417 CXX test/cpp_headers/nvme_ocssd.o 00:02:02.417 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:02.417 CXX test/cpp_headers/nvme_spec.o 00:02:02.417 CXX test/cpp_headers/nvme_zns.o 00:02:02.417 LINK poller_perf 00:02:02.417 CXX test/cpp_headers/nvmf_cmd.o 00:02:02.417 LINK startup 00:02:02.417 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:02.417 CXX test/cpp_headers/nvmf.o 
00:02:02.417 CXX test/cpp_headers/nvmf_spec.o 00:02:02.417 CXX test/cpp_headers/nvmf_transport.o 00:02:02.417 CXX test/cpp_headers/opal.o 00:02:02.417 CXX test/cpp_headers/opal_spec.o 00:02:02.417 CXX test/cpp_headers/pci_ids.o 00:02:02.417 CXX test/cpp_headers/pipe.o 00:02:02.417 LINK cmb_copy 00:02:02.417 CXX test/cpp_headers/queue.o 00:02:02.417 CXX test/cpp_headers/reduce.o 00:02:02.417 CXX test/cpp_headers/rpc.o 00:02:02.417 LINK pmr_persistence 00:02:02.417 CXX test/cpp_headers/scheduler.o 00:02:02.417 CXX test/cpp_headers/scsi.o 00:02:02.417 LINK bdev_svc 00:02:02.417 LINK boot_partition 00:02:02.417 LINK err_injection 00:02:02.417 LINK app_repeat 00:02:02.417 CXX test/cpp_headers/scsi_spec.o 00:02:02.688 CXX test/cpp_headers/sock.o 00:02:02.688 LINK reserve 00:02:02.688 CXX test/cpp_headers/stdinc.o 00:02:02.688 LINK connect_stress 00:02:02.688 LINK ioat_perf 00:02:02.688 LINK mkfs 00:02:02.688 CXX test/cpp_headers/string.o 00:02:02.688 LINK doorbell_aers 00:02:02.688 LINK verify 00:02:02.688 LINK fused_ordering 00:02:02.688 LINK hello_bdev 00:02:02.688 LINK simple_copy 00:02:02.688 CXX test/cpp_headers/thread.o 00:02:02.688 LINK hello_world 00:02:02.688 CXX test/cpp_headers/trace.o 00:02:02.688 LINK reset 00:02:02.688 LINK hotplug 00:02:02.688 CXX test/cpp_headers/trace_parser.o 00:02:02.688 LINK hello_sock 00:02:02.688 LINK scheduler 00:02:02.688 LINK thread 00:02:02.688 LINK hello_blob 00:02:02.688 LINK sgl 00:02:02.688 CXX test/cpp_headers/tree.o 00:02:02.688 LINK spdk_dd 00:02:02.688 LINK nvme_dp 00:02:02.688 LINK overhead 00:02:02.688 LINK aer 00:02:02.688 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:02.688 CXX test/cpp_headers/ublk.o 00:02:02.688 CXX test/cpp_headers/util.o 00:02:02.688 CXX test/cpp_headers/uuid.o 00:02:02.688 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:02.688 LINK spdk_trace 00:02:02.688 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:02.688 LINK reconnect 00:02:02.688 LINK fdp 00:02:02.688 LINK idxd_perf 00:02:02.688 CXX test/cpp_headers/vfio_user_pci.o 00:02:02.688 CXX test/cpp_headers/version.o 00:02:02.949 LINK nvme_compliance 00:02:02.949 CXX test/cpp_headers/vfio_user_spec.o 00:02:02.949 LINK nvmf 00:02:02.949 LINK arbitration 00:02:02.949 CXX test/cpp_headers/vhost.o 00:02:02.949 LINK pci_ut 00:02:02.949 CXX test/cpp_headers/xor.o 00:02:02.949 CXX test/cpp_headers/vmd.o 00:02:02.949 CXX test/cpp_headers/zipf.o 00:02:02.949 LINK accel_perf 00:02:02.949 LINK abort 00:02:02.949 LINK dif 00:02:02.949 LINK test_dma 00:02:02.949 LINK bdevio 00:02:02.949 LINK blobcli 00:02:03.208 LINK nvme_manage 00:02:03.208 LINK spdk_nvme 00:02:03.208 LINK spdk_bdev 00:02:03.208 LINK spdk_nvme_perf 00:02:03.208 LINK nvme_fuzz 00:02:03.208 LINK spdk_nvme_identify 00:02:03.208 LINK spdk_top 00:02:03.208 LINK mem_callbacks 00:02:03.467 LINK memory_ut 00:02:03.467 LINK vhost_fuzz 00:02:03.467 LINK bdevperf 00:02:03.467 LINK cuse 00:02:04.403 LINK iscsi_fuzz 00:02:06.304 LINK esnap 00:02:06.304 00:02:06.304 real 0m48.166s 00:02:06.304 user 6m36.070s 00:02:06.304 sys 4m22.073s 00:02:06.304 01:04:41 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:06.304 01:04:41 make -- common/autotest_common.sh@10 -- $ set +x 00:02:06.304 ************************************ 00:02:06.304 END TEST make 00:02:06.304 ************************************ 00:02:06.304 01:04:41 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:06.304 01:04:41 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:06.305 01:04:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 
00:02:06.305 01:04:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.305 01:04:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:06.305 01:04:41 -- pm/common@44 -- $ pid=3800711 00:02:06.305 01:04:41 -- pm/common@50 -- $ kill -TERM 3800711 00:02:06.305 01:04:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.305 01:04:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:06.305 01:04:41 -- pm/common@44 -- $ pid=3800713 00:02:06.305 01:04:41 -- pm/common@50 -- $ kill -TERM 3800713 00:02:06.305 01:04:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.305 01:04:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:06.305 01:04:41 -- pm/common@44 -- $ pid=3800714 00:02:06.305 01:04:41 -- pm/common@50 -- $ kill -TERM 3800714 00:02:06.305 01:04:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.305 01:04:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:06.305 01:04:41 -- pm/common@44 -- $ pid=3800746 00:02:06.305 01:04:41 -- pm/common@50 -- $ sudo -E kill -TERM 3800746 00:02:06.564 01:04:42 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:06.564 01:04:42 -- nvmf/common.sh@7 -- # uname -s 00:02:06.564 01:04:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:06.564 01:04:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:06.564 01:04:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:06.564 01:04:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:06.564 01:04:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:06.564 01:04:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:06.564 01:04:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:06.564 01:04:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:06.564 01:04:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:06.564 01:04:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:06.564 01:04:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:02:06.564 01:04:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:02:06.564 01:04:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:06.564 01:04:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:06.564 01:04:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:06.564 01:04:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:06.565 01:04:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:06.565 01:04:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:06.565 01:04:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:06.565 01:04:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:06.565 01:04:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.565 01:04:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.565 01:04:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.565 01:04:42 -- paths/export.sh@5 -- # export PATH 00:02:06.565 01:04:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:06.565 01:04:42 -- nvmf/common.sh@47 -- # : 0 00:02:06.565 01:04:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:06.565 01:04:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:06.565 01:04:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:06.565 01:04:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:06.565 01:04:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:06.565 01:04:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:06.565 01:04:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:06.565 01:04:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:06.565 01:04:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:06.565 01:04:42 -- spdk/autotest.sh@32 -- # uname -s 00:02:06.565 01:04:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:06.565 01:04:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:06.565 01:04:42 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:06.565 01:04:42 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:06.565 01:04:42 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:06.565 01:04:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:06.565 01:04:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:06.565 01:04:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:06.565 01:04:42 -- spdk/autotest.sh@48 -- # udevadm_pid=3860695 00:02:06.565 01:04:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:06.565 01:04:42 -- pm/common@17 -- # local monitor 00:02:06.565 01:04:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.565 01:04:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:06.565 01:04:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.565 01:04:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.565 01:04:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:06.565 01:04:42 -- pm/common@25 -- # sleep 1 00:02:06.565 01:04:42 -- pm/common@21 -- # date +%s 00:02:06.565 01:04:42 -- pm/common@21 -- # date +%s 00:02:06.565 01:04:42 -- pm/common@21 -- # date +%s 00:02:06.565 01:04:42 -- pm/common@21 -- # date +%s 00:02:06.565 01:04:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715727882 00:02:06.565 01:04:42 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715727882 00:02:06.565 01:04:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715727882 00:02:06.565 01:04:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715727882 00:02:06.565 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715727882_collect-vmstat.pm.log 00:02:06.565 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715727882_collect-cpu-load.pm.log 00:02:06.565 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715727882_collect-cpu-temp.pm.log 00:02:06.565 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715727882_collect-bmc-pm.bmc.pm.log 00:02:07.504 01:04:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:07.504 01:04:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:07.504 01:04:43 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:07.504 01:04:43 -- common/autotest_common.sh@10 -- # set +x 00:02:07.504 01:04:43 -- spdk/autotest.sh@59 -- # create_test_list 00:02:07.504 01:04:43 -- common/autotest_common.sh@744 -- # xtrace_disable 00:02:07.504 01:04:43 -- common/autotest_common.sh@10 -- # set +x 00:02:07.504 01:04:43 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:07.504 01:04:43 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.504 01:04:43 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.504 01:04:43 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:07.504 01:04:43 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.504 01:04:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:07.504 01:04:43 -- common/autotest_common.sh@1451 -- # uname 00:02:07.504 01:04:43 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:02:07.504 01:04:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:07.504 01:04:43 -- common/autotest_common.sh@1471 -- # uname 00:02:07.504 01:04:43 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:02:07.504 01:04:43 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:07.504 01:04:43 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:07.504 01:04:43 -- spdk/autotest.sh@72 -- # hash lcov 00:02:07.504 01:04:43 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:07.504 01:04:43 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:07.504 --rc lcov_branch_coverage=1 00:02:07.504 --rc lcov_function_coverage=1 00:02:07.504 --rc genhtml_branch_coverage=1 00:02:07.504 --rc genhtml_function_coverage=1 00:02:07.504 --rc genhtml_legend=1 00:02:07.504 --rc geninfo_all_blocks=1 00:02:07.504 ' 
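The xtrace above shows autotest.sh starting the power/resource monitors (collect-cpu-load, collect-vmstat, collect-cpu-temp and, via sudo -E, collect-bmc-pm) with a shared output directory and log prefix, after the earlier pm/common block had killed any leftover monitors through their per-monitor .pid files under ../output/power. A minimal bash sketch of that start/stop pattern follows; the wrapper function names and the trap-based teardown are illustrative assumptions, not the actual scripts/perf/pm implementation, while the collector names, flags and pid-file locations are taken from the log.

```bash
#!/usr/bin/env bash
# Illustrative sketch of the monitor start/stop pattern visible in the xtrace;
# not SPDK's pm/common code. Collector names, flags and pid-file paths mirror the log.
set -e

PM_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
OUT_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
PREFIX="monitor.autotest.sh.$(date +%s)"
MONITORS=(collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm)

start_monitors() {
    mkdir -p "$OUT_DIR"
    for mon in "${MONITORS[@]}"; do
        if [[ $mon == collect-bmc-pm ]]; then
            # BMC power readings need elevated privileges, hence sudo -E in the log.
            sudo -E "$PM_DIR/$mon" -d "$OUT_DIR" -l -p "$PREFIX" &
        else
            "$PM_DIR/$mon" -d "$OUT_DIR" -l -p "$PREFIX" &
        fi
        echo $! > "$OUT_DIR/${mon}.pid"   # per-monitor pid file, as in the log
    done
}

stop_monitors() {
    for mon in "${MONITORS[@]}"; do
        pidfile="$OUT_DIR/${mon}.pid"
        [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")" 2>/dev/null || true
    done
}

start_monitors
trap stop_monitors EXIT   # assumed teardown hook; the log kills the pids explicitly
```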
00:02:07.504 01:04:43 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:07.504 --rc lcov_branch_coverage=1 00:02:07.504 --rc lcov_function_coverage=1 00:02:07.504 --rc genhtml_branch_coverage=1 00:02:07.504 --rc genhtml_function_coverage=1 00:02:07.504 --rc genhtml_legend=1 00:02:07.504 --rc geninfo_all_blocks=1 00:02:07.504 ' 00:02:07.504 01:04:43 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:07.504 --rc lcov_branch_coverage=1 00:02:07.504 --rc lcov_function_coverage=1 00:02:07.504 --rc genhtml_branch_coverage=1 00:02:07.504 --rc genhtml_function_coverage=1 00:02:07.504 --rc genhtml_legend=1 00:02:07.504 --rc geninfo_all_blocks=1 00:02:07.504 --no-external' 00:02:07.504 01:04:43 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:07.504 --rc lcov_branch_coverage=1 00:02:07.504 --rc lcov_function_coverage=1 00:02:07.504 --rc genhtml_branch_coverage=1 00:02:07.504 --rc genhtml_function_coverage=1 00:02:07.504 --rc genhtml_legend=1 00:02:07.504 --rc geninfo_all_blocks=1 00:02:07.504 --no-external' 00:02:07.504 01:04:43 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:07.763 lcov: LCOV version 1.14 00:02:07.763 01:04:43 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:17.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:17.743 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:17.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:17.743 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:17.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:17.743 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:17.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:17.743 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:29.955 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:29.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:29.955 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:30.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:30.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:30.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:30.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:30.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:30.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:30.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:30.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:30.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:30.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:30.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:30.269 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:30.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 
00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:30.270 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:30.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:30.270 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:30.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:30.529 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:30.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:30.529 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:30.530 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:30.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:30.530 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:31.907 01:05:07 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:31.907 01:05:07 -- common/autotest_common.sh@720 -- # xtrace_disable 00:02:31.907 01:05:07 -- common/autotest_common.sh@10 -- # set +x 00:02:31.907 01:05:07 -- spdk/autotest.sh@91 -- # rm -f 00:02:31.907 01:05:07 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:35.197 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:35.197 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:35.197 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:35.197 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:35.197 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:35.197 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:35.197 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:35.197 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:35.197 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:35.197 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:35.197 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:35.197 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:35.197 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:35.197 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:35.197 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:35.197 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:35.197 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:02:35.197 01:05:10 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:35.197 01:05:10 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:35.197 01:05:10 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:35.197 01:05:10 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:35.197 01:05:10 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:35.197 01:05:10 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:35.197 01:05:10 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:35.197 01:05:10 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:35.197 01:05:10 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:35.197 01:05:10 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:35.197 01:05:10 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:35.197 01:05:10 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:35.197 01:05:10 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:35.197 01:05:10 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:35.197 01:05:10 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:35.197 No valid GPT data, bailing 00:02:35.197 01:05:10 -- scripts/common.sh@391 
-- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:35.197 01:05:10 -- scripts/common.sh@391 -- # pt= 00:02:35.197 01:05:10 -- scripts/common.sh@392 -- # return 1 00:02:35.197 01:05:10 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:35.197 1+0 records in 00:02:35.197 1+0 records out 00:02:35.197 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00647243 s, 162 MB/s 00:02:35.197 01:05:10 -- spdk/autotest.sh@118 -- # sync 00:02:35.197 01:05:10 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:35.197 01:05:10 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:35.197 01:05:10 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:41.766 01:05:17 -- spdk/autotest.sh@124 -- # uname -s 00:02:41.766 01:05:17 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:41.766 01:05:17 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:41.766 01:05:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:41.766 01:05:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:41.766 01:05:17 -- common/autotest_common.sh@10 -- # set +x 00:02:41.766 ************************************ 00:02:41.766 START TEST setup.sh 00:02:41.766 ************************************ 00:02:41.766 01:05:17 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:41.766 * Looking for test storage... 00:02:41.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:41.766 01:05:17 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:41.766 01:05:17 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:41.766 01:05:17 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:41.766 01:05:17 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:41.766 01:05:17 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:41.766 01:05:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:41.766 ************************************ 00:02:41.766 START TEST acl 00:02:41.766 ************************************ 00:02:41.766 01:05:17 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:41.766 * Looking for test storage... 
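Before the setup tests begin, the xtrace above probes the NVMe namespace: it skips zoned devices by reading /sys/block/<name>/queue/zoned, treats an empty PTTYPE from blkid (after the spdk-gpt.py check reports "No valid GPT data, bailing") as an unused disk, and then zeroes the first MiB with dd before syncing. A rough sketch of that probe-and-wipe pattern is below; it mirrors the commands visible in the log but is not the actual autotest.sh/common.sh code, and the messages it prints are placeholders.

```bash
#!/usr/bin/env bash
# Illustrative sketch of the device probe seen in the xtrace above; run as root.
set -e
shopt -s extglob nullglob

# Same namespace glob as the log uses: nvme block devices, excluding partitions.
for dev in /dev/nvme*n!(*p*); do
    name=$(basename "$dev")

    # Skip zoned namespaces: the log tests /sys/block/<name>/queue/zoned != "none".
    zoned_attr="/sys/block/$name/queue/zoned"
    if [[ -e $zoned_attr && $(cat "$zoned_attr") != none ]]; then
        echo "skipping zoned namespace $dev"
        continue
    fi

    # An empty PTTYPE means no partition-table signature; blkid exits non-zero
    # in that case, hence the '|| true' so 'set -e' does not abort.
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -n $pt ]]; then
        echo "$dev already carries a partition table ($pt); leaving it in use"
        continue
    fi

    # Zero the first MiB of the unused namespace, as the log does before the tests.
    dd if=/dev/zero of="$dev" bs=1M count=1
done
sync
```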
00:02:41.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:41.766 01:05:17 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:41.766 01:05:17 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:41.766 01:05:17 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:41.766 01:05:17 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:41.766 01:05:17 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:41.766 01:05:17 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:41.766 01:05:17 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:41.766 01:05:17 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:41.766 01:05:17 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:41.766 01:05:17 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:41.766 01:05:17 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:41.766 01:05:17 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:41.766 01:05:17 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:41.766 01:05:17 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:41.766 01:05:17 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:41.766 01:05:17 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:45.057 01:05:20 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:45.057 01:05:20 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:45.057 01:05:20 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:45.057 01:05:20 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:45.057 01:05:20 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:45.057 01:05:20 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:48.427 Hugepages 00:02:48.427 node hugesize free / total 00:02:48.427 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:48.427 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:48.427 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.427 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:48.427 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:48.427 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.427 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:48.427 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:48.427 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.427 00:02:48.427 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:48.427 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:48.427 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:48.427 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.427 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:48.427 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:48.427 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:48.427 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.427 01:05:23 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:48.427 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:48.427 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.428 01:05:23 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:48.428 01:05:23 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:48.428 01:05:23 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:48.428 01:05:23 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:48.428 01:05:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:48.428 ************************************ 00:02:48.428 START TEST denied 00:02:48.428 ************************************ 00:02:48.428 01:05:23 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:02:48.428 01:05:23 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:02:48.428 01:05:23 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:48.428 01:05:23 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:02:48.428 01:05:23 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:48.428 01:05:23 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:51.715 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:02:51.715 01:05:27 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:02:51.715 01:05:27 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:51.715 01:05:27 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:51.715 01:05:27 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:02:51.716 01:05:27 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:02:51.716 01:05:27 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:51.716 01:05:27 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:51.716 01:05:27 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:51.716 01:05:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:51.716 01:05:27 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:56.988 00:02:56.988 real 0m7.961s 00:02:56.988 user 0m2.519s 00:02:56.988 sys 0m4.791s 00:02:56.988 01:05:31 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:56.988 01:05:31 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:56.988 ************************************ 00:02:56.988 END TEST denied 00:02:56.988 ************************************ 00:02:56.988 01:05:31 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:56.988 01:05:31 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:56.988 01:05:31 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:56.988 01:05:31 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:56.988 ************************************ 00:02:56.988 START TEST allowed 00:02:56.988 ************************************ 00:02:56.988 01:05:31 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:02:56.988 01:05:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:02:56.988 01:05:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:56.988 01:05:31 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:02:56.988 01:05:31 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.988 01:05:31 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:01.182 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:01.182 01:05:36 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:01.182 01:05:36 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:01.182 01:05:36 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:01.182 01:05:36 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:01.182 01:05:36 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.471 00:03:04.471 real 0m8.171s 00:03:04.471 user 0m2.095s 00:03:04.471 sys 0m4.406s 00:03:04.471 01:05:39 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:04.471 01:05:39 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:04.471 ************************************ 00:03:04.471 END TEST allowed 00:03:04.471 ************************************ 00:03:04.471 00:03:04.471 real 0m22.745s 00:03:04.471 user 0m6.762s 00:03:04.471 sys 0m13.809s 00:03:04.471 01:05:39 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:04.471 01:05:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:04.471 ************************************ 00:03:04.471 END TEST acl 00:03:04.471 ************************************ 00:03:04.471 01:05:40 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:04.471 01:05:40 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:04.471 01:05:40 setup.sh -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:03:04.471 01:05:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:04.471 ************************************ 00:03:04.471 START TEST hugepages 00:03:04.471 ************************************ 00:03:04.471 01:05:40 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:04.731 * Looking for test storage... 00:03:04.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 37934776 kB' 'MemAvailable: 42606432 kB' 'Buffers: 2696 kB' 'Cached: 14287420 kB' 'SwapCached: 0 kB' 'Active: 10330544 kB' 'Inactive: 4455220 kB' 'Active(anon): 9764092 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499480 kB' 'Mapped: 207232 kB' 'Shmem: 9268444 kB' 'KReclaimable: 294248 kB' 'Slab: 931060 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 636812 kB' 'KernelStack: 22016 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439056 kB' 'Committed_AS: 11119648 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216440 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.732 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
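The loop being traced here is the get_meminfo helper in setup/common.sh scanning the captured /proc/meminfo contents record by record: IFS=': ' splits each line into a key and a value, every key that is not the requested one falls through to continue, and the matching key (Hugepagesize, a few records further down) has its value echoed back before the function returns. A minimal stand-alone sketch of that pattern follows; the function name get_meminfo_field and its single-argument signature are illustrative and not the project's actual helper.

    #!/usr/bin/env bash
    # Minimal sketch of the scan traced above: look up one field of
    # /proc/meminfo using IFS=': ' and read -r var val _.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys
            echo "$val"                        # value only, unit dropped
            return 0
        done </proc/meminfo
        return 1                               # field not present
    }

    get_meminfo_field Hugepagesize   # prints 2048 on a 2048 kB hugepage system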
00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
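Once the Hugepagesize record matches, the trace just below shows what that value feeds: hugepages.sh records default_hugepages=2048, derives the per-size and global nr_hugepages control paths, enumerates the NUMA nodes (no_nodes=2 on this machine), and clear_hp writes 0 into every per-node hugepage pool before the default_setup test starts. A hedged sketch of that clearing step, assuming the standard sysfs layout and the nr_hugepages file name implied by the paths in the trace (the exact redirection target is not visible in the log):

    # Reset every hugepage pool on every NUMA node to zero pages.
    # Requires root and is only sensible on a disposable test host.
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        for pool in "$node_dir"/hugepages/hugepages-*; do
            echo 0 > "$pool/nr_hugepages"   # e.g. .../hugepages-2048kB/nr_hugepages
        done
    done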
00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:04.733 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.734 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:04.734 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.734 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:04.734 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:04.734 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:04.734 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:04.734 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:04.734 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:04.734 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.734 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:04.734 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.734 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:04.734 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:04.734 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.734 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:04.734 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:04.734 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:04.734 01:05:40 
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:04.734 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:04.734 01:05:40 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:04.734 01:05:40 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:04.734 01:05:40 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:04.734 01:05:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:04.734 ************************************ 00:03:04.734 START TEST default_setup 00:03:04.734 ************************************ 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.734 01:05:40 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:08.023 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:08.023 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:08.023 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:08.023 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:08.023 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:08.023 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:08.023 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 
00:03:08.023 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:08.023 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:08.023 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:08.023 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:08.023 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:08.023 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:08.023 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:08.023 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:08.283 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:09.662 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:09.662 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:09.662 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:09.662 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:09.662 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:09.662 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:09.662 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:09.662 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:09.662 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:09.662 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:09.662 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:09.662 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:09.662 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:09.662 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:09.662 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.662 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.662 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.662 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.662 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.662 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.662 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40110140 kB' 'MemAvailable: 44781796 kB' 'Buffers: 2696 kB' 'Cached: 14287544 kB' 'SwapCached: 0 kB' 'Active: 10341232 kB' 'Inactive: 4455220 kB' 'Active(anon): 9774780 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509688 kB' 'Mapped: 207396 kB' 'Shmem: 9268568 kB' 'KReclaimable: 294248 kB' 'Slab: 929136 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 634888 kB' 'KernelStack: 22128 kB' 'PageTables: 8704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11131812 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 216520 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
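The meminfo snapshot printed a few records above already contains the numbers the default_setup verification is after: Hugepagesize is 2048 kB, Hugetlb is 2097152 kB, and HugePages_Total and HugePages_Free are both 1024, which is exactly the requested 2097152 kB divided by the 2048 kB page size (the nr_hugepages=1024 computed earlier in the trace). The pass in progress around this point is only re-reading that snapshot to pull out AnonHugePages. The same end-to-end check can be written directly against /proc/meminfo; this is an independent sketch, not the verify_nr_hugepages routine itself, and the 2097152 kB request size is taken from this particular run:

    # Confirm the kernel granted the expected number of hugepages.
    requested_kb=2097152
    pagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    expected=$((requested_kb / pagesize_kb))   # 2097152 / 2048 = 1024
    if ((total == expected)); then
        echo "hugepages OK: $total pages of ${pagesize_kb} kB"
    else
        echo "hugepages mismatch: want $expected, have $total" >&2
    fi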
00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.663 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.664 01:05:45 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
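The AnonHugePages pass has just finished with echo 0 / return 0, so anon=0 (no transparent hugepages backing anonymous memory on this host), and get_meminfo is now being invoked again from the top for HugePages_Surp, with a third pass for HugePages_Rsvd further below; the snapshot is rescanned once per requested field. Purely as an illustration of what those three passes collect, and not how setup/common.sh is written, the same counters can be pulled out in a single awk pass:

    # One-pass alternative: AnonHugePages (kB), HugePages_Surp and
    # HugePages_Rsvd (page counts) from a single read of /proc/meminfo.
    read -r anon surp resv < <(awk '
        /^AnonHugePages:/  { a = $2 }
        /^HugePages_Surp:/ { s = $2 }
        /^HugePages_Rsvd:/ { r = $2 }
        END { print a, s, r }
    ' /proc/meminfo)
    echo "anon=${anon}kB surp=${surp} resv=${resv}"   # here: anon=0kB surp=0 resv=0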
00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40110808 kB' 'MemAvailable: 44782464 kB' 'Buffers: 2696 kB' 'Cached: 14287548 kB' 'SwapCached: 0 kB' 'Active: 10341252 kB' 'Inactive: 4455220 kB' 'Active(anon): 9774800 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509768 kB' 'Mapped: 207372 kB' 'Shmem: 9268572 kB' 'KReclaimable: 294248 kB' 'Slab: 929136 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 634888 kB' 'KernelStack: 22160 kB' 'PageTables: 9088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11131832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216632 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.664 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
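A note on why the right-hand operands in these comparisons are printed as \H\u\g\e\P\a\g\e\s\_\S\u\r\p: when the operand inside [[ ... == ... ]] comes from a quoted expansion (as it appears to here), bash's xtrace escapes each of its characters to mark it as a literal match rather than a glob pattern, which is also why the earlier unquoted comparison in this same trace is shown plainly as [[ output == output ]]. A short reproduction, with the variable name get chosen only for the demo:

    set -x
    get=HugePages_Surp
    [[ MemFree == "$get" ]]    # xtrace shows the RHS as \H\u\g\e\P\a\g\e\s\_\S\u\r\p
    [[ output == output ]]     # unquoted literal word, traced without escaping
    set +x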
00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:09.928 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue (likewise for the remaining /proc/meminfo keys through HugePages_Rsvd: none matches HugePages_Surp)
00:03:09.929 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:09.929 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:09.929 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:09.929 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
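For readers following the trace: get_meminfo in setup/common.sh walks the chosen meminfo file line by line, splits each line on ': ', and echoes the value once the requested key matches, which is why every non-matching key shows up above as a continue. A minimal standalone sketch of the same field extraction, assuming plain /proc/meminfo input (illustrative only, not the SPDK helper itself):

# Illustrative sketch -- mirrors what the trace above does, not SPDK's code.
get_meminfo_field() {
    local get=$1 var val _
    # Split each "Key:   value kB" line on ':' and whitespace; echo the value on a match.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

# Example (values from the snapshot below): get_meminfo_field HugePages_Surp  ->  0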
00:03:09.929 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:09.929 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:09.929 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:09.929 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.929 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.929 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40113500 kB' 'MemAvailable: 44785156 kB' 'Buffers: 2696 kB' 'Cached: 14287564 kB' 'SwapCached: 0 kB' 'Active: 10340672 kB' 'Inactive: 4455220 kB' 'Active(anon): 9774220 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509096 kB' 'Mapped: 207296 kB' 'Shmem: 9268588 kB' 'KReclaimable: 294248 kB' 'Slab: 929132 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 634884 kB' 'KernelStack: 22224 kB' 'PageTables: 9224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11131852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216584 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB'
00:03:09.929 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31-@32 -- # IFS=': ' / read -r var val _ / continue repeated for each key from MemTotal through HugePages_Free: none matches HugePages_Rsvd
00:03:09.931 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:09.931 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:09.931 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:09.931 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:09.931 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:09.931 nr_hugepages=1024
00:03:09.931 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:09.931 resv_hugepages=0
00:03:09.931 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:09.931 surplus_hugepages=0
00:03:09.931 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:09.931 anon_hugepages=0
00:03:09.931 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:09.931 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
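The echo lines and arithmetic above are the accounting step: the script reads back the surplus, reserved, and total hugepage counters and checks them against the requested pool (the trace continues with the HugePages_Total lookup right below; with 0 surplus and 0 reserved every (( ... )) test expands to 1024 == 1024 and passes). A rough equivalent of that kind of check, reusing the get_meminfo_field sketch from earlier; the variable names are illustrative, not the script's own:

# Illustrative re-creation of the accounting check, not the literal hugepages.sh code.
nr_hugepages=1024                              # pool size requested for this test
surp=$(get_meminfo_field HugePages_Surp)       # 0 in this run
resv=$(get_meminfo_field HugePages_Rsvd)       # 0 in this run
total=$(get_meminfo_field HugePages_Total)     # 1024 in this run

# The pool the kernel reports should line up with the requested pages plus any
# surplus and reserved ones.
(( total == nr_hugepages + surp + resv )) || echo "unexpected hugepage accounting" >&2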
00:03:09.931 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:09.931 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:09.931 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:09.931 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.931 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.931 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40114016 kB' 'MemAvailable: 44785672 kB' 'Buffers: 2696 kB' 'Cached: 14287584 kB' 'SwapCached: 0 kB' 'Active: 10341024 kB' 'Inactive: 4455220 kB' 'Active(anon): 9774572 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509332 kB' 'Mapped: 207296 kB' 'Shmem: 9268608 kB' 'KReclaimable: 294248 kB' 'Slab: 929132 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 634884 kB' 'KernelStack: 22160 kB' 'PageTables: 9352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11131872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216616 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB'
00:03:09.932 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31-@32 -- # IFS=': ' / read -r var val _ / continue repeated for each key from MemTotal through Unaccepted: none matches HugePages_Total
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
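get_nodes above enumerates /sys/devices/system/node/node* (two NUMA nodes on this host, with the whole 1024-page pool sitting on node 0), and the per-node pass that follows re-runs the same lookup against /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that is stripped before parsing. A small sketch of that per-node variant, under the same illustrative naming as before (not the SPDK code itself):

# Illustrative per-node lookup -- same parsing idea, pointed at one node's meminfo.
shopt -s extglob
node_meminfo_field() {
    local node=$1 get=$2 line var val _
    while read -r line; do
        line=${line#Node +([0-9]) }            # drop the "Node <n> " prefix
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "/sys/devices/system/node/node${node}/meminfo"
    return 1
}

# Example (matches the node0 snapshot below): node_meminfo_field 0 HugePages_Surp  ->  0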
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 18422008 kB' 'MemUsed: 14217132 kB' 'SwapCached: 0 kB' 'Active: 6660428 kB' 'Inactive: 4306116 kB' 'Active(anon): 6371364 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 4306116 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10656184 kB' 'Mapped: 102508 kB' 'AnonPages: 312960 kB' 'Shmem: 6061004 kB' 'KernelStack: 13800 kB' 'PageTables: 5576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197232 kB' 'Slab: 511260 kB' 'SReclaimable: 197232 kB' 'SUnreclaim: 314028 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:09.933 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31-@32 -- # IFS=': ' / read -r var val _ / continue repeated over the node0 meminfo keys (MemTotal .. HugePages_Total), none matching HugePages_Surp so far 00:03:09.934 01:05:45
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.934 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.934 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.934 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.934 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.934 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.934 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.934 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:09.934 01:05:45 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:09.934 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:09.935 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:09.935 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:09.935 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:09.935 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:09.935 node0=1024 expecting 1024 00:03:09.935 01:05:45 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:09.935 00:03:09.935 real 0m5.209s 00:03:09.935 user 0m1.423s 00:03:09.935 sys 0m2.348s 00:03:09.935 01:05:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:09.935 01:05:45 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:09.935 ************************************ 00:03:09.935 END TEST default_setup 00:03:09.935 ************************************ 00:03:09.935 01:05:45 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:09.935 01:05:45 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:09.935 01:05:45 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:09.935 01:05:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:09.935 ************************************ 00:03:09.935 START TEST per_node_1G_alloc 00:03:09.935 ************************************ 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
size >= default_hugepages )) 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.935 01:05:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:13.230 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:13.230 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:13.230 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:13.230 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:13.230 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:13.230 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:13.230 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:13.230 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:13.230 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:13.230 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:13.230 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:13.230 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:13.230 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:13.230 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:13.230 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:13.230 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:13.230 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:13.230 01:05:48 
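At this point per_node_1G_alloc has requested size=1048576 kB of default-size (2048 kB) hugepages, i.e. nr_hugepages=512 (1048576 / 2048), on each of nodes 0 and 1, and passes NRHUGE=512 HUGENODE=0,1 to scripts/setup.sh to apply before verification. As a rough, illustrative sketch only (this is not the SPDK setup script itself, just the standard kernel sysfs knob a per-node reservation of this shape relies on):

  # Reserve 512 x 2048 kB hugepages on NUMA nodes 0 and 1 (1 GiB per node, 2 GiB total).
  NRHUGE=512
  for node in 0 1; do
      echo "$NRHUGE" | sudo tee \
          "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
  done
  # The system-wide counters should then report 1024 pages, matching the
  # 'HugePages_Total: 1024' seen in the meminfo dumps that follow.
  grep -E 'HugePages_(Total|Free)' /proc/meminfo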
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40085924 kB' 'MemAvailable: 44757580 kB' 'Buffers: 2696 kB' 'Cached: 14287688 kB' 'SwapCached: 0 kB' 'Active: 10341112 kB' 'Inactive: 4455220 kB' 'Active(anon): 9774660 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509276 kB' 'Mapped: 206300 kB' 'Shmem: 9268712 kB' 'KReclaimable: 294248 kB' 'Slab: 929396 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 635148 kB' 'KernelStack: 22080 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11122124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216616 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.230 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.230 01:05:48 [get_meminfo checks every intervening /proc/meminfo field -- MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp -- against AnonHugePages and continues past each one] 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@32 -- # continue 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
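The trace above is get_meminfo walking /proc/meminfo entry by entry (IFS=': ', read -r var val _, compare, continue) until it reaches the requested field, AnonHugePages here, and the same walk is about to repeat for HugePages_Surp and then HugePages_Rsvd. A minimal stand-alone sketch of that lookup pattern (a simplified illustration, not the repo's setup/common.sh; it omits the per-node /sys/devices/system/node/node<N>/meminfo handling and the 'Node <id>' prefix stripping visible in the trace):

  # Print the value of a single /proc/meminfo field, or 0 if it is absent.
  get_meminfo_field() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"   # value column only; the unit column (kB), when present, is discarded
              return 0
          fi
      done </proc/meminfo
      echo 0
  }

  # Usage mirroring the checks performed by verify_nr_hugepages:
  anon=$(get_meminfo_field AnonHugePages)
  surp=$(get_meminfo_field HugePages_Surp)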
00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40086456 kB' 'MemAvailable: 44758112 kB' 'Buffers: 2696 kB' 'Cached: 14287692 kB' 'SwapCached: 0 kB' 'Active: 10340364 kB' 'Inactive: 4455220 kB' 'Active(anon): 9773912 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508516 kB' 'Mapped: 206216 kB' 'Shmem: 9268716 kB' 'KReclaimable: 294248 kB' 'Slab: 929376 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 635128 kB' 'KernelStack: 22048 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11122144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216584 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.232 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.232 01:05:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.232 01:05:48 [get_meminfo checks every intervening /proc/meminfo field -- Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd -- against HugePages_Surp and continues past each one] 00:03:13.234 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.234 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.234 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.234 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:13.234 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:13.234 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:13.234 01:05:48 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:13.234 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:13.234 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:13.234 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.234 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.234 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.234 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.234 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.234 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.234 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.234 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40086848 kB' 'MemAvailable: 44758504 kB' 'Buffers: 2696 kB' 'Cached: 14287708 kB' 'SwapCached: 0 kB' 'Active: 10340380 kB' 'Inactive: 4455220 kB' 'Active(anon): 9773928 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508520 kB' 'Mapped: 206216 kB' 'Shmem: 9268732 kB' 'KReclaimable: 294248 kB' 'Slab: 929376 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 635128 kB' 'KernelStack: 22048 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11122164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216584 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.235 01:05:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.235 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.236 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:13.237 nr_hugepages=1024 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:13.237 resv_hugepages=0 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:13.237 surplus_hugepages=0 00:03:13.237 01:05:48 
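The trace above shows the test resolving HugePages_Surp and HugePages_Rsvd by walking every meminfo line with IFS=': ' until the requested key matches, which yields surp=0 and resv=0 on this machine. A minimal sketch of that parsing pattern, reconstructed from the traced logic; the helper name and the Node-prefix handling are illustrative approximations, not the exact setup/common.sh implementation:

# get_meminfo_sketch KEY [NODE] - print KEY's value from /proc/meminfo, or
# from the per-node meminfo file when NODE is given. Reconstructed from the
# trace above; simplified, not the upstream setup/common.sh helper.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        # Per-node files prefix every line with "Node <n> "; drop that prefix
        # so the key names match the /proc/meminfo spelling.
        if [[ $line == Node\ * ]]; then
            line=${line#Node }
            line=${line#* }
        fi
        # Split "HugePages_Rsvd:       0" into key and value on ':' or space.
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

# Example: the two lookups traced above both print 0 on this machine.
# get_meminfo_sketch HugePages_Surp
# get_meminfo_sketch HugePages_Rsvd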
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:13.237 anon_hugepages=0 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40086092 kB' 'MemAvailable: 44757748 kB' 'Buffers: 2696 kB' 'Cached: 14287708 kB' 'SwapCached: 0 kB' 'Active: 10340380 kB' 'Inactive: 4455220 kB' 'Active(anon): 9773928 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508520 kB' 'Mapped: 206216 kB' 'Shmem: 9268732 kB' 'KReclaimable: 294248 kB' 'Slab: 929376 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 635128 kB' 'KernelStack: 22048 kB' 'PageTables: 8776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11122188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216584 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.237 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.238 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.239 01:05:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- 
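Before get_nodes runs, the test confirms the pool is fully accounted for: the system-wide HugePages_Total (1024) must cover nr_hugepages plus the surplus and reserved pages, both 0 in this run, and the request is then split evenly across the NUMA nodes (512 pages each). A self-contained sketch of that accounting check and node split; variable names are illustrative, and awk is used here in place of the script's own get_meminfo helper:

# Assumption: 2 NUMA nodes and 2 MiB hugepages, as in the run traced above.
# Illustrative sketch only, not the upstream setup/hugepages.sh logic.
nr_hugepages=1024
surp=$(awk '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
resv=$(awk '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)

# The pool is consistent when the global total covers the request plus any
# surplus/reserved pages (all zero in this log, so 1024 == 1024 + 0 + 0).
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: total=$total surp=$surp resv=$resv"
fi

# Split the request evenly across the NUMA nodes, mirroring nodes_sys[n]=512.
declare -a nodes_test
nodes=()
for node in /sys/devices/system/node/node[0-9]*; do
    [[ -d $node ]] && nodes+=("${node##*node}")
done
per_node=$(( nr_hugepages / ${#nodes[@]} ))   # 1024 / 2 = 512 in this run
for n in "${nodes[@]}"; do
    nodes_test[$n]=$per_node
done
echo "nodes: ${!nodes_test[*]} -> ${nodes_test[*]} pages each"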
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19437276 kB' 'MemUsed: 13201864 kB' 'SwapCached: 0 kB' 'Active: 6660428 kB' 'Inactive: 4306116 kB' 'Active(anon): 6371364 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 4306116 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10656180 kB' 'Mapped: 101436 kB' 'AnonPages: 313584 kB' 'Shmem: 6061000 kB' 'KernelStack: 13624 kB' 'PageTables: 5076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197232 kB' 'Slab: 511752 kB' 'SReclaimable: 197232 kB' 'SUnreclaim: 314520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.239 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.240 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.501 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656068 kB' 'MemFree: 20649192 kB' 'MemUsed: 7006876 kB' 'SwapCached: 0 kB' 'Active: 3679656 kB' 'Inactive: 149104 kB' 'Active(anon): 3402268 kB' 'Inactive(anon): 0 kB' 'Active(file): 277388 kB' 'Inactive(file): 149104 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3634288 kB' 'Mapped: 104780 kB' 'AnonPages: 194536 kB' 'Shmem: 3207796 kB' 'KernelStack: 8408 kB' 'PageTables: 3648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97016 kB' 'Slab: 417624 kB' 'SReclaimable: 97016 kB' 'SUnreclaim: 320608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.502 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.503 
01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.503 01:05:48 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:13.503 node0=512 expecting 512 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:13.503 node1=512 expecting 512 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:13.503 00:03:13.503 real 0m3.373s 00:03:13.503 user 0m1.240s 00:03:13.503 sys 0m2.160s 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:13.503 01:05:48 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:13.503 ************************************ 00:03:13.503 END TEST per_node_1G_alloc 00:03:13.503 ************************************ 00:03:13.503 01:05:48 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:13.503 01:05:48 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:13.503 01:05:48 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:13.503 01:05:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:13.503 ************************************ 00:03:13.503 START TEST even_2G_alloc 
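The "node0=512 expecting 512" / "node1=512 expecting 512" lines just above are per_node_1G_alloc comparing each NUMA node's hugepage count against an even split. A minimal standalone sketch of that kind of check, assuming the sysfs layout the trace reads (/sys/devices/system/node/nodeN/meminfo); the function name check_node_hugepages is hypothetical, not part of SPDK's setup/hugepages.sh:

    # Sketch only, not the SPDK helper; check_node_hugepages is a made-up name.
    check_node_hugepages() {
        local expected=$1 node count
        for node in /sys/devices/system/node/node[0-9]*; do
            # Per-node meminfo lines look like: "Node 0 HugePages_Total:   512"
            count=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
            echo "${node##*/}=$count expecting $expected"
            [[ $count -eq $expected ]] || return 1
        done
    }
    # e.g.: check_node_hugepages 512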
00:03:13.503 ************************************ 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.503 01:05:49 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:16.798 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:16.798 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:16.798 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:16.798 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:16.798 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:16.798 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:16.798 0000:00:04.1 (8086 
2021): Already using the vfio-pci driver 00:03:16.798 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:16.798 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:16.798 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:16.798 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:16.798 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:16.798 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:16.798 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:16.798 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:16.798 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:16.798 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:16.798 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:16.798 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:16.798 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:16.798 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:16.798 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:16.798 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:16.798 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:16.798 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:16.798 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:16.798 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:16.798 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:16.798 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.798 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.798 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.798 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.798 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.798 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.798 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.798 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.798 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40059632 kB' 'MemAvailable: 44731288 kB' 'Buffers: 2696 kB' 'Cached: 14287856 kB' 'SwapCached: 0 kB' 'Active: 10340968 kB' 'Inactive: 4455220 kB' 'Active(anon): 9774516 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508908 kB' 'Mapped: 206236 kB' 'Shmem: 9268880 kB' 'KReclaimable: 294248 kB' 'Slab: 928688 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 634440 kB' 'KernelStack: 22048 kB' 'PageTables: 
8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11122812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216584 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.799 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
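The repeated "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" pairs in this trace are setup/common.sh's get_meminfo walking every meminfo key until it reaches the requested one: as traced above, it mapfiles the relevant meminfo file, strips any leading "Node <N> " prefix, then reads "key: value" pairs with IFS=': ' and echoes the value on a match (or 0 if none). A condensed, hedged rewrite of that lookup, illustrative only and not the original helper:

    shopt -s extglob   # needed for the +([0-9]) pattern, as in the traced script
    get_meminfo_sketch() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
        local -a mem
        # Prefer the per-node file when a node is given and exists, else /proc/meminfo.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix on per-node files
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        echo 0
    }
    # e.g.: get_meminfo_sketch HugePages_Surp 1   # surplus 2 MiB pages on node 1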
00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.800 01:05:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40058800 kB' 'MemAvailable: 44730456 kB' 'Buffers: 2696 kB' 'Cached: 14287860 kB' 'SwapCached: 0 kB' 'Active: 10340800 kB' 'Inactive: 4455220 kB' 'Active(anon): 9774348 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508772 kB' 'Mapped: 206220 kB' 'Shmem: 9268884 kB' 'KReclaimable: 294248 kB' 'Slab: 928720 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 634472 kB' 'KernelStack: 22048 kB' 'PageTables: 8804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11122464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216552 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.800 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.801 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
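Note for readers of this trace: the repeated `[[ <key> == HugePages_Surp ]] ... continue` pairs are setup/common.sh's get_meminfo scanning every meminfo field until it reaches the one it was asked for. Below is a minimal, hand-written reconstruction of that lookup as it appears in the xtrace output (mapfile the chosen meminfo file, strip any "Node N " prefix, then walk the "Key: value kB" lines with IFS=': '). It is inferred from the trace, not copied from the SPDK source, so anything not visible in the log should be treated as an assumption.

#!/usr/bin/env bash
# Minimal reconstruction of the get_meminfo lookup traced above (an assumption
# based on the xtrace output, not the actual setup/common.sh implementation).
shopt -s extglob

get_meminfo() {
    local get=$1 node=$2
    local mem var val _
    local mem_f=/proc/meminfo

    # Per-node counters live under /sys; fall back to the global file otherwise.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix each line with "Node N "; strip it so the keys line up.
    mem=("${mem[@]#Node +([0-9]) }")

    # Scan "Key: value kB" lines until the requested key matches, then print its value.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

# Example: global surplus pages, and free hugepages on NUMA node 0.
surp=$(get_meminfo HugePages_Surp)
free0=$(get_meminfo HugePages_Free 0)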
00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40058840 kB' 'MemAvailable: 44730496 kB' 'Buffers: 2696 kB' 'Cached: 14287860 kB' 'SwapCached: 0 kB' 'Active: 10340268 kB' 'Inactive: 4455220 kB' 'Active(anon): 9773816 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 508192 kB' 'Mapped: 206220 kB' 'Shmem: 9268884 kB' 'KReclaimable: 294248 kB' 'Slab: 928720 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 634472 kB' 'KernelStack: 22016 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11122484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216520 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.802 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
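A side note on reading these comparisons: the backslashes in \H\u\g\e\P\a\g\e\s\_\R\s\v\d are not in the script itself. With `set -x`, bash re-quotes the right-hand side of a `[[ ... == ... ]]` test so that the printed form would still match literally if re-executed, which is why every character of the quoted key is shown escaped. A short, hypothetical illustration (not from the test):

set -x
var=HugePages_Rsvd
[[ $var == "HugePages_Rsvd" ]] && echo matched
# trace output: + [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]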
00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.803 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 
01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:16.804 nr_hugepages=1024 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:16.804 resv_hugepages=0 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:16.804 surplus_hugepages=0 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:16.804 anon_hugepages=0 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.804 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40059940 
kB' 'MemAvailable: 44731596 kB' 'Buffers: 2696 kB' 'Cached: 14287864 kB' 'SwapCached: 0 kB' 'Active: 10340072 kB' 'Inactive: 4455220 kB' 'Active(anon): 9773620 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507992 kB' 'Mapped: 206220 kB' 'Shmem: 9268888 kB' 'KReclaimable: 294248 kB' 'Slab: 928720 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 634472 kB' 'KernelStack: 21984 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11122512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216520 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.805 01:05:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.805 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.070 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.070 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.070 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.070 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.070 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.070 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.070 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.070 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.070 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.070 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.070 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.070 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.070 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.070 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
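(Editor's aside) The long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] ... continue" entries above is a single get_meminfo lookup: setup/common.sh walks every line of /proc/meminfo (or a per-node meminfo file), skips each field that is not the requested key, and echoes the value once HugePages_Total finally matches, which is the "echo 1024" that follows. A condensed sketch of that lookup, assuming bash with extglob; it paraphrases the traced helper rather than quoting SPDK's setup/common.sh verbatim:

  shopt -s extglob                     # the "Node +([0-9]) " strip below is an extglob pattern
  get_meminfo() {
      local get=$1 node=${2:-} var val _ line
      local mem_f=/proc/meminfo mem
      # per-node counters live under /sys; drop the "Node N " prefix so keys line up
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # every non-matching field in the trace hits this branch
          echo "$val"                        # e.g. get_meminfo HugePages_Total prints 1024 below
          return 0
      done
      return 1
  }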
00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:17.071 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19422568 kB' 'MemUsed: 13216572 kB' 'SwapCached: 0 kB' 'Active: 6661168 kB' 'Inactive: 4306116 kB' 'Active(anon): 6372104 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 4306116 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10656280 kB' 'Mapped: 101440 kB' 'AnonPages: 314204 kB' 'Shmem: 6061100 kB' 'KernelStack: 13656 kB' 'PageTables: 5116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197232 kB' 'Slab: 511096 kB' 'SReclaimable: 197232 kB' 'SUnreclaim: 313864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.072 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656068 kB' 'MemFree: 20646968 kB' 'MemUsed: 7009100 kB' 'SwapCached: 0 kB' 'Active: 3679412 kB' 'Inactive: 149104 kB' 'Active(anon): 3402024 kB' 'Inactive(anon): 0 kB' 'Active(file): 277388 kB' 'Inactive(file): 149104 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3634364 kB' 'Mapped: 104780 kB' 'AnonPages: 194220 kB' 'Shmem: 3207872 kB' 'KernelStack: 8376 kB' 'PageTables: 3628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97016 kB' 'Slab: 417624 kB' 'SReclaimable: 97016 kB' 'SUnreclaim: 320608 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.073 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
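(Editor's aside) The node0 scan above and this node1 scan both end by echoing HugePages_Surp, and hugepages.sh folds those per-node surplus values into its expectation table before printing the "nodeN=512 expecting 512" lines further down. Roughly the bookkeeping involved, using the get_meminfo sketch from the previous aside and the values seen in this run; the real hugepages.sh plumbing differs in detail:

  nodes_test=(512 512)   # expected even split of the 1024 test hugepages
  nodes_sys=(512 512)    # what get_nodes read back from /sys for node0/node1
  resv=0
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))               # reserved pages: 0 in this run
      surp=$(get_meminfo HugePages_Surp "$node")   # the per-node scans traced here
      (( nodes_test[node] += surp ))               # surplus is also 0 here
  done
  for node in "${!nodes_test[@]}"; do
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
  done
  # prints node0=512 expecting 512 and node1=512 expecting 512, matching the log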
00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:17.074 node0=512 expecting 512 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:17.074 node1=512 expecting 512 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:17.074 00:03:17.074 real 0m3.522s 00:03:17.074 user 0m1.295s 00:03:17.074 sys 0m2.266s 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:17.074 01:05:52 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:17.074 ************************************ 00:03:17.074 END TEST even_2G_alloc 00:03:17.074 ************************************ 00:03:17.074 01:05:52 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:17.074 01:05:52 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:17.074 01:05:52 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:17.074 01:05:52 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:17.074 ************************************ 00:03:17.074 START TEST odd_alloc 00:03:17.074 ************************************ 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:17.074 01:05:52 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.074 01:05:52 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.435 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:20.435 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:20.435 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:20.435 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:20.435 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:20.435 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:20.435 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:20.435 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:20.435 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:20.435 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:20.435 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:20.435 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:20.435 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:20.435 0000:80:04.2 
(8086 2021): Already using the vfio-pci driver 00:03:20.435 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:20.435 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:20.435 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40081520 kB' 'MemAvailable: 44753176 kB' 'Buffers: 2696 kB' 'Cached: 14288016 kB' 'SwapCached: 0 kB' 'Active: 10341652 kB' 'Inactive: 4455220 kB' 'Active(anon): 9775200 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509416 kB' 'Mapped: 206272 kB' 'Shmem: 9269040 kB' 'KReclaimable: 294248 kB' 'Slab: 928744 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 634496 kB' 'KernelStack: 22016 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 11123616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216536 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3202420 kB' 
'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.435 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.436 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40081664 kB' 'MemAvailable: 44753320 kB' 'Buffers: 2696 kB' 'Cached: 14288020 kB' 'SwapCached: 0 kB' 'Active: 10341616 kB' 'Inactive: 4455220 kB' 'Active(anon): 9775164 kB' 'Inactive(anon): 0 
kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509456 kB' 'Mapped: 206232 kB' 'Shmem: 9269044 kB' 'KReclaimable: 294248 kB' 'Slab: 928844 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 634596 kB' 'KernelStack: 22064 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 11123632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216504 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.437 
01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.437 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 
01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:20.438 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40082320 kB' 'MemAvailable: 44753976 kB' 'Buffers: 2696 kB' 'Cached: 14288036 kB' 'SwapCached: 0 kB' 'Active: 10341348 kB' 'Inactive: 4455220 kB' 'Active(anon): 9774896 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 509152 kB' 'Mapped: 206232 kB' 'Shmem: 9269060 kB' 'KReclaimable: 294248 kB' 'Slab: 928836 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 634588 kB' 'KernelStack: 22016 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 11123656 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216488 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 
01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.439 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:20.440 nr_hugepages=1025 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:20.440 resv_hugepages=0 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:20.440 surplus_hugepages=0 00:03:20.440 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:20.440 anon_hugepages=0 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.441 01:05:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40083700 kB' 'MemAvailable: 44755356 kB' 'Buffers: 2696 kB' 'Cached: 14288056 kB' 'SwapCached: 0 kB' 'Active: 10342940 kB' 'Inactive: 4455220 kB' 'Active(anon): 9776488 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 510880 kB' 'Mapped: 206240 kB' 'Shmem: 9269080 kB' 'KReclaimable: 294248 kB' 'Slab: 928836 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 634588 kB' 'KernelStack: 22048 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486608 kB' 'Committed_AS: 11140176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216456 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.441 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19435464 kB' 'MemUsed: 13203676 kB' 'SwapCached: 0 kB' 'Active: 6660448 kB' 'Inactive: 4306116 kB' 'Active(anon): 6371384 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 4306116 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10656312 kB' 'Mapped: 101452 kB' 'AnonPages: 313424 kB' 'Shmem: 6061132 kB' 'KernelStack: 13688 kB' 'PageTables: 4848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197232 kB' 'Slab: 511224 kB' 'SReclaimable: 197232 kB' 'SUnreclaim: 313992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.442 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.443 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656068 kB' 'MemFree: 20650276 kB' 'MemUsed: 7005792 kB' 'SwapCached: 0 kB' 'Active: 3680480 kB' 'Inactive: 149104 kB' 'Active(anon): 3403092 kB' 'Inactive(anon): 0 kB' 'Active(file): 277388 kB' 'Inactive(file): 149104 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3634480 kB' 'Mapped: 104788 kB' 'AnonPages: 195328 kB' 'Shmem: 3207988 kB' 'KernelStack: 8360 kB' 'PageTables: 3544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97016 kB' 'Slab: 417608 kB' 'SReclaimable: 97016 kB' 'SUnreclaim: 320592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
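The odd_alloc pass traced here boils down to a sum check: /proc/meminfo reported 'HugePages_Total: 1025', the two node meminfo files report 512 and 513 pages, and the per-node totals must add back up to the system-wide figure once surplus and reserved pages are folded in. The snippet below is a condensed sketch of that accounting, not the test script itself; the function name and the awk-based parsing are illustrative stand-ins for the setup/common.sh helpers seen in the trace.

# Sketch: confirm the per-node hugepage totals add up to the system-wide count.
check_odd_alloc_sum() {
    local total node per_node sum=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1025 in this run
    for node in /sys/devices/system/node/node[0-9]*; do
        # Per-node lines look like "Node 0 HugePages_Total:   512"; the value is field 4.
        per_node=$(awk '/HugePages_Total:/ {print $4}' "$node/meminfo")
        echo "${node##*/}=$per_node"                              # node0=512, node1=513
        sum=$((sum + per_node))
    done
    (( sum == total ))
}

Surplus pages are zero on both nodes in this run (HugePages_Surp: 0), which is why the plain sum already matches the 1025-page total.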
00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.444 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
# echo 'node0=512 expecting 513' 00:03:20.445 node0=512 expecting 513 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:20.445 node1=513 expecting 512 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:20.445 00:03:20.445 real 0m3.280s 00:03:20.445 user 0m1.193s 00:03:20.445 sys 0m2.103s 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:20.445 01:05:55 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:20.445 ************************************ 00:03:20.445 END TEST odd_alloc 00:03:20.445 ************************************ 00:03:20.445 01:05:55 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:20.445 01:05:55 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:20.445 01:05:55 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:20.445 01:05:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:20.445 ************************************ 00:03:20.445 START TEST custom_alloc 00:03:20.445 ************************************ 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:20.445 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
-- # for node in "${!nodes_hp[@]}" 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.446 01:05:56 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:23.736 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:23.736 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:23.736 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:23.736 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:23.736 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:23.736 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:23.736 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:23.736 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:23.736 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:23.736 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:23.736 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:23.736 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:23.736 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:23.736 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:23.736 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:23.736 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:23.736 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:23.736 01:05:59 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:23.736 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:23.736 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:23.736 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 39061368 kB' 'MemAvailable: 43733024 kB' 'Buffers: 2696 kB' 'Cached: 14288184 kB' 'SwapCached: 0 kB' 'Active: 10342916 kB' 'Inactive: 4455220 kB' 'Active(anon): 9776464 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 510372 kB' 'Mapped: 206272 kB' 'Shmem: 9269208 kB' 'KReclaimable: 294248 kB' 'Slab: 929140 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 634892 kB' 'KernelStack: 22096 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 11124072 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216552 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
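[editorial note] The long run of "continue" entries around here is setup/common.sh's get_meminfo helper scanning /proc/meminfo one field at a time until it reaches the requested key (AnonHugePages at this point in the trace). A minimal sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source, is:

    shopt -s extglob

    get_meminfo() {
            local get=$1 node=${2:-}
            local mem_f=/proc/meminfo
            local -a mem
            local var val _ line

            # Per-node lookups read the node-local meminfo file when it exists;
            # with no node argument the check fails and /proc/meminfo is used.
            if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
                    mem_f=/sys/devices/system/node/node$node/meminfo
            fi

            mapfile -t mem < "$mem_f"
            mem=("${mem[@]#Node +([0-9]) }")   # node files prefix every line with "Node N "

            for line in "${mem[@]}"; do
                    IFS=': ' read -r var val _ <<< "$line"
                    [[ $var == "$get" ]] || continue   # the repeated "continue" entries in the log
                    echo "$val"
                    return 0
            done
            echo 0
    }

Called as anon=$(get_meminfo AnonHugePages), it yields 0 here, matching the "anon=0" assignment recorded a little further down in the trace.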
00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.737 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.738 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 39061668 kB' 'MemAvailable: 43733324 kB' 'Buffers: 2696 kB' 'Cached: 14288200 kB' 'SwapCached: 0 kB' 'Active: 10342904 kB' 'Inactive: 4455220 kB' 'Active(anon): 9776452 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 510472 kB' 'Mapped: 206240 kB' 'Shmem: 9269224 kB' 'KReclaimable: 294248 kB' 'Slab: 929408 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 635160 kB' 'KernelStack: 22032 kB' 'PageTables: 8772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 11124588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216520 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.003 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.004 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
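[editorial note] These three lookups are verify_nr_hugepages collecting AnonHugePages, HugePages_Surp and HugePages_Rsvd after HUGENODE was set above to 'nodes_hp[0]=512,nodes_hp[1]=1024', i.e. 1536 pre-allocated 2048 kB pages in total (Hugetlb: 3145728 kB). An illustrative recap of that bookkeeping, using the get_meminfo helper traced above; the final comparison is a simplification for readability, not the verbatim script:

    nr_hugepages=1536                        # 512 on node0 + 1024 on node1, per HUGENODE above

    anon=$(get_meminfo AnonHugePages)        # 0 kB in this run (THP enabled is "always [madvise] never")
    surp=$(get_meminfo HugePages_Surp)       # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)       # 0 in this run
    total=$(get_meminfo HugePages_Total)     # 1536 in this run

    # Illustrative check: the pages actually backed by the kernel, minus any
    # surplus, should equal what the test requested, with nothing reserved.
    (( total - surp == nr_hugepages && resv == 0 )) || echo "hugepage setup mismatch" >&2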
00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 39062328 kB' 'MemAvailable: 43733984 kB' 'Buffers: 2696 kB' 'Cached: 14288216 kB' 'SwapCached: 0 kB' 'Active: 10343044 kB' 'Inactive: 4455220 kB' 'Active(anon): 9776592 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 510592 kB' 'Mapped: 206240 kB' 'Shmem: 9269240 kB' 'KReclaimable: 294248 kB' 'Slab: 929404 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 635156 kB' 'KernelStack: 22032 kB' 'PageTables: 8780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 11124612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
216504 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.005 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.006 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@100 -- # resv=0 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:24.007 nr_hugepages=1536 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.007 resv_hugepages=0 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.007 surplus_hugepages=0 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.007 anon_hugepages=0 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 39062596 kB' 'MemAvailable: 43734252 kB' 'Buffers: 2696 kB' 'Cached: 14288256 kB' 'SwapCached: 0 kB' 'Active: 10342696 kB' 'Inactive: 4455220 kB' 'Active(anon): 9776244 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 510216 kB' 'Mapped: 206240 kB' 'Shmem: 9269280 kB' 'KReclaimable: 294248 kB' 'Slab: 929404 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 635156 kB' 'KernelStack: 22016 kB' 'PageTables: 8728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963344 kB' 'Committed_AS: 11124632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216504 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
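The trace above is setup/common.sh's get_meminfo walking a meminfo file one "key: value" line at a time with IFS=': ' read and echoing the value once the requested key (HugePages_Rsvd, then HugePages_Total) matches. Below is a minimal standalone sketch of that pattern, assuming a hypothetical helper name get_meminfo_field rather than the script's own function; it illustrates the mechanism visible in the trace, not the exact setup/common.sh implementation.

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace above (hypothetical
# helper name; the real helper also strips the "Node N " prefix with the
# extglob expansion "${mem[@]#Node +([0-9]) }" shown in the trace).
get_meminfo_field() {
    local get=$1 node=${2:-} line var val _
    local mem_f=/proc/meminfo
    # With a node number, read the per-node counters from sysfs instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
        # sysfs per-node meminfo prefixes every line with "Node <n> ";
        # a plain regex stands in here for the script's extglob strip.
        [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done <"$mem_f"
    echo 0
}

get_meminfo_field HugePages_Rsvd       # system-wide reserved hugepages (0 in the trace)
get_meminfo_field HugePages_Total      # 1536 in the trace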
00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
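A few entries back, hugepages.sh@102-@109 echoed nr_hugepages=1536, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then asserted that the page count requested by custom_alloc matches what the kernel reports. A self-contained sketch of that accounting check follows; the variable names and the awk extraction are illustrative, not lifted from hugepages.sh itself.

#!/usr/bin/env bash
# Sketch of the consistency check traced above: the requested hugepage count
# (1536 for custom_alloc) must match the kernel's counters, with no reserved
# or surplus pages left over.
set -euo pipefail

want=1536
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
rsvd=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)

echo "nr_hugepages=$total resv_hugepages=$rsvd surplus_hugepages=$surp"

# Mirrors "(( 1536 == nr_hugepages + surp + resv ))" and
# "(( 1536 == nr_hugepages ))" from the trace above.
(( want == total + surp + rsvd )) || { echo "unexpected hugepage accounting"; exit 1; }
(( want == total ))               || { echo "expected $want pages, got $total"; exit 1; }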
00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.007 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.008 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.009 01:05:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 19440280 kB' 'MemUsed: 13198860 kB' 'SwapCached: 0 kB' 'Active: 6661308 kB' 'Inactive: 4306116 kB' 'Active(anon): 6372244 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 4306116 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10656396 kB' 'Mapped: 101460 kB' 'AnonPages: 314184 kB' 'Shmem: 6061216 kB' 'KernelStack: 13640 kB' 'PageTables: 5076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197232 kB' 'Slab: 511640 kB' 'SReclaimable: 197232 kB' 'SUnreclaim: 314408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.009 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.010 01:05:59 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656068 kB' 'MemFree: 19622316 kB' 'MemUsed: 8033752 kB' 'SwapCached: 0 kB' 'Active: 3681412 kB' 'Inactive: 149104 kB' 'Active(anon): 3404024 kB' 'Inactive(anon): 0 kB' 'Active(file): 277388 kB' 'Inactive(file): 149104 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3634576 kB' 'Mapped: 104780 kB' 'AnonPages: 196028 kB' 'Shmem: 3208084 kB' 'KernelStack: 8376 kB' 'PageTables: 3652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 97016 kB' 'Slab: 417764 kB' 'SReclaimable: 97016 kB' 'SUnreclaim: 320748 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.010 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.011 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:24.011 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.011 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:24.011 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:24.011 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
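The surrounding entries are the per-node half of the check: get_nodes recorded the custom_alloc split (512 pages expected on node0, 1024 on node1, no_nodes=2), and get_meminfo is now reading HugePages_Surp from /sys/devices/system/node/node1/meminfo. Below is a rough sketch of that per-node walk, with an illustrative expected[] map and a helper of my own naming rather than the script's code.

#!/usr/bin/env bash
# Sketch of the per-node hugepage walk traced above. In sysfs, each line of
# nodeN/meminfo reads "Node N <Key>: <value>", so the key is field 3 and the
# value field 4 (HugePages_* lines carry no "kB" suffix).
shopt -s nullglob
declare -A expected=( [0]=512 [1]=1024 )   # custom_alloc split from the trace

node_field() {  # node_field <node> <key>  -> value from that node's meminfo
    awk -v key="$2:" '$3 == key {print $4}' "/sys/devices/system/node/node$1/meminfo"
}

for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    total=$(node_field "$node" HugePages_Total)
    free=$(node_field "$node"  HugePages_Free)
    surp=$(node_field "$node"  HugePages_Surp)
    echo "node$node: total=$total free=$free surp=$surp (expected ${expected[$node]:-0})"
done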
[setup/common.sh@31-32 xtrace: each remaining node meminfo field (Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free) is read with IFS=': ' and skipped with continue because it is not HugePages_Surp]
00:03:24.012 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:24.012 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:24.012 01:05:59 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:24.012 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:24.012 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:24.012 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:24.012 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:24.012 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:24.012 node0=512 expecting 512
00:03:24.012 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:24.012 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:24.012 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:24.012 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:24.012 node1=1024 expecting 1024
00:03:24.012 01:05:59 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:24.012 
00:03:24.012 real 0m3.582s
00:03:24.012 user 0m1.340s
00:03:24.012 sys 0m2.290s
00:03:24.012 01:05:59 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:03:24.012 01:05:59 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:24.012 ************************************
00:03:24.012 END TEST custom_alloc
00:03:24.012 ************************************
00:03:24.012 01:05:59 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:24.012 01:05:59 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:03:24.012 01:05:59 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:03:24.012 01:05:59 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:24.272 ************************************
00:03:24.272 START TEST no_shrink_alloc
00:03:24.272 ************************************
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
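The get_test_nr_hugepages trace above turns the 2097152 kB request into nr_hugepages=1024 and assigns all of it to the single requested node. A minimal sketch of that sizing arithmetic, assuming the 2048 kB Hugepagesize reported in the meminfo dumps further down (variable names are illustrative, not copied from setup/hugepages.sh):

    #!/usr/bin/env bash
    # Illustrative sketch: size a hugepage reservation the way the trace suggests.
    size_kb=2097152                  # first argument to get_test_nr_hugepages in this log
    default_hugepage_kb=2048         # Hugepagesize from /proc/meminfo in this run
    node_ids=(0)                     # only node 0 was requested
    nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 2097152 / 2048 = 1024
    declare -A nodes_test
    for node in "${node_ids[@]}"; do
        nodes_test[$node]=$nr_hugepages   # all pages land on the requested node
    done
    echo "node0=${nodes_test[0]}"    # node0=1024

With a 2048 kB page size, 2097152 / 2048 = 1024, which is exactly the nr_hugepages=1024 and nodes_test[_no_nodes]=1024 recorded above.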
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:24.272 01:05:59 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:27.561 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:27.561 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:27.561 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:27.561 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:27.561 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:27.561 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:27.561 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:27.561 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:27.561 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:27.561 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:27.561 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:27.561 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:27.561 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:27.561 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:27.561 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:27.561 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:27.561 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
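Almost everything that follows is one pattern repeated: setup/common.sh snapshots the meminfo file, then walks it with IFS=': ' and read -r var val _, skipping fields until the requested key is found and echoing its value. A self-contained sketch of that lookup style, assuming a plain /proc/meminfo (the function name below is illustrative, not the real get_meminfo):

    #!/usr/bin/env bash
    # Illustrative sketch of the meminfo lookup pattern traced below.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip fields until the requested key
            echo "$val"                        # the trailing unit, if any, is captured by _
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo_value AnonHugePages   # this run reports 0

In this log the AnonHugePages lookup returns 0, which verify_nr_hugepages records as anon=0 a little further down.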
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.561 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40012868 kB' 'MemAvailable: 44684524 kB' 'Buffers: 2696 kB' 'Cached: 14288336 kB' 'SwapCached: 0 kB' 'Active: 10349888 kB' 'Inactive: 4455220 kB' 'Active(anon): 9783436 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516824 kB' 'Mapped: 207228 kB' 'Shmem: 9269360 kB' 'KReclaimable: 294248 kB' 'Slab: 929848 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 635600 kB' 'KernelStack: 22080 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11133900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216572 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB'
[setup/common.sh@31-32 xtrace: the dump is then walked field by field (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted), each read with IFS=': ' and skipped with continue because it is not AnonHugePages]
00:03:27.563 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:27.563 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:27.563 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:27.563 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:27.563 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:27.563 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:27.563 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:27.563 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:27.563 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:27.563 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.563 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.563 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.563 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.563 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.563 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.563 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.563 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40014364 kB' 'MemAvailable: 44686020 kB' 'Buffers: 2696 kB' 'Cached: 14288340 kB' 'SwapCached: 0 kB' 'Active: 10349176 kB' 'Inactive: 4455220 kB' 'Active(anon): 9782724 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516568 kB' 'Mapped: 207144 kB' 'Shmem: 9269364 kB' 'KReclaimable: 294248 kB' 'Slab: 929784 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 635536 kB' 'KernelStack: 22064 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11133920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216556 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB'
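The dump just printed is the raw material for the next lookups (HugePages_Surp now, HugePages_Rsvd after it). As a quick way to sanity-check such a snapshot outside the test, an awk filter can pull out the hugepage counters and confirm the state this run reports (1024 total, 1024 free, 0 reserved, 0 surplus); this helper is illustrative and not part of the SPDK scripts:

    #!/usr/bin/env bash
    # Illustrative check over a /proc/meminfo snapshot like the dumps in this log.
    awk -F': *' '
        /^HugePages_(Total|Free|Rsvd|Surp):/ { v[$1] = $2 + 0 }
        END {
            printf "total=%d free=%d rsvd=%d surp=%d\n",
                   v["HugePages_Total"], v["HugePages_Free"],
                   v["HugePages_Rsvd"], v["HugePages_Surp"]
            exit !(v["HugePages_Total"] == v["HugePages_Free"] && v["HugePages_Surp"] == 0)
        }' /proc/meminfo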
[setup/common.sh@31-32 xtrace: the same field-by-field walk repeats against HugePages_Surp; every field from MemTotal through HugePages_Rsvd is read with IFS=': ' and skipped with continue]
00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40014980 kB' 'MemAvailable: 44686636 kB' 'Buffers: 2696 kB' 'Cached: 14288360 kB' 'SwapCached: 0 kB' 'Active: 10349716 kB' 'Inactive: 4455220 kB' 'Active(anon): 9783264 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517072 kB' 'Mapped: 207144 kB' 'Shmem: 9269384 kB' 'KReclaimable: 294248 kB' 'Slab: 929784 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 635536 kB' 'KernelStack: 22064 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11134080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216556 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB'
00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32 xtrace: a third field-by-field walk begins for HugePages_Rsvd (MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), ...) and the section breaks off inside this scan]
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.565 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.567 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo 
nr_hugepages=1024 00:03:27.568 nr_hugepages=1024 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:27.568 resv_hugepages=0 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:27.568 surplus_hugepages=0 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:27.568 anon_hugepages=0 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 40014728 kB' 'MemAvailable: 44686384 kB' 'Buffers: 2696 kB' 'Cached: 14288388 kB' 'SwapCached: 0 kB' 'Active: 10349464 kB' 'Inactive: 4455220 kB' 'Active(anon): 9783012 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516824 kB' 'Mapped: 207144 kB' 'Shmem: 9269412 kB' 'KReclaimable: 294248 kB' 'Slab: 929784 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 635536 kB' 'KernelStack: 22096 kB' 'PageTables: 9000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11134468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216572 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.568 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.569 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.570 01:06:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:27.570 
01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 18361356 kB' 'MemUsed: 14277784 kB' 'SwapCached: 0 kB' 'Active: 6667056 kB' 'Inactive: 4306116 kB' 'Active(anon): 6377992 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 4306116 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10656492 kB' 'Mapped: 101648 kB' 'AnonPages: 319808 kB' 'Shmem: 6061312 kB' 'KernelStack: 13672 kB' 'PageTables: 5208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197232 kB' 'Slab: 511860 kB' 'SReclaimable: 197232 kB' 'SUnreclaim: 314628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.570 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:27.571 01:06:02 
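The keys being skipped in the loop above (Mlocked, Dirty, FilePages, Mapped, AnonPages, Shmem, KernelStack, ...) are the per-node fields of /sys/devices/system/node/node0/meminfo; the scan walks them one by one until it reaches HugePages_Surp for this node. The same per-node hugepage counters are also exposed as individual sysfs files, which is a quick way to eyeball them by hand. The helper below is a hypothetical stand-alone sketch, not part of the SPDK scripts, and it assumes the 2048 kB page size reported in the meminfo snapshots later in this log.

# Hypothetical helper (not part of the SPDK tree): print per-node hugepage
# counters straight from sysfs; assumes 2048 kB pages, as in this run.
for d in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
    [[ -d $d ]] || continue
    node=${d#/sys/devices/system/node/}   # node0/hugepages/hugepages-2048kB
    node=${node%%/*}                      # -> node0
    echo "$node: total=$(<"$d/nr_hugepages") free=$(<"$d/free_hugepages") surplus=$(<"$d/surplus_hugepages")"
done
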
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:27.571 node0=1024 expecting 1024 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.571 01:06:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:30.866 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:30.866 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:30.866 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:30.866 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:30.867 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:30.867 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:30.867 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:30.867 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:30.867 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:30.867 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:30.867 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:30.867 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:30.867 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:30.867 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:30.867 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:30.867 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:30.867 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:30.867 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:30.867 01:06:06 
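At this point the test has confirmed 'node0=1024 expecting 1024', then re-run scripts/setup.sh with NRHUGE=512 and CLEAR_HUGE=no: the PCI devices it manages are already bound to vfio-pci, and setup.sh reports that the 1024 pages already allocated on node0 are left in place rather than shrunk to 512. verify_nr_hugepages now starts a second pass to prove the pool was not reduced. The snippet below is a hypothetical stand-alone version of that final check, under the simplifying assumption that only the system-wide counter matters (the real verify_nr_hugepages also walks the per-node counters, as the surrounding trace shows).

# Hypothetical check (not the SPDK script): verify the hugepage pool still
# holds the expected number of pages after a smaller allocation request.
expected=1024
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
free=$(awk '/^HugePages_Free:/ {print $2}' /proc/meminfo)
echo "HugePages_Total=$total HugePages_Free=$free (expecting $expected)"
[[ $total -eq $expected ]] || { echo "hugepage pool was shrunk" >&2; exit 1; }
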
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 39987336 kB' 'MemAvailable: 44658992 kB' 'Buffers: 2696 kB' 'Cached: 14288488 kB' 'SwapCached: 0 kB' 'Active: 10344692 kB' 'Inactive: 4455220 kB' 'Active(anon): 9778240 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512000 kB' 'Mapped: 206388 kB' 'Shmem: 9269512 kB' 'KReclaimable: 294248 kB' 'Slab: 930068 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 635820 kB' 'KernelStack: 22096 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11126756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216552 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.867 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.868 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.868 01:06:06 
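Earlier in this pass (setup/hugepages.sh@96) the script matched the string 'always [madvise] never' against *[never]*; that string is the kernel's transparent-hugepage policy, where the bracketed word is the active setting, and the trace shows the AnonHugePages lookup only runs when the active policy is not [never]. The lines below are an illustrative way to inspect the same switch directly, not SPDK code.

# Read the active THP policy; the bracketed entry is the one in effect
# (this run shows "always [madvise] never", i.e. madvise).
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
echo "THP policy: $thp"
if [[ $thp == *"[never]"* ]]; then
    echo "THP disabled; AnonHugePages is expected to stay at 0"
fi
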
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 39987536 kB' 'MemAvailable: 44659192 kB' 'Buffers: 2696 kB' 'Cached: 14288492 kB' 'SwapCached: 0 kB' 'Active: 10344028 kB' 'Inactive: 4455220 kB' 'Active(anon): 9777576 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511364 kB' 'Mapped: 206268 kB' 'Shmem: 9269516 kB' 'KReclaimable: 294248 kB' 'Slab: 930052 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 635804 kB' 'KernelStack: 22064 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11126132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216520 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
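The printf above is the snapshot that get_meminfo iterates over: the whole of /proc/meminfo (or the per-node meminfo when a node is given) is read into an array, any 'Node N ' prefix is stripped, and each 'key: value' pair is compared against the requested key until it matches. The following is a compact sketch of that parser reconstructed from the xtrace in this log; it is not a verbatim copy of setup/common.sh and details of the real script may differ.

#!/usr/bin/env bash
shopt -s extglob    # needed for the +([0-9]) prefix strip below

# Sketch of the parser this trace is stepping through (reconstructed from
# the xtrace output; not the actual setup/common.sh).
get_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo mem
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # drop "Node 0 " on per-node files
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue      # skip MemTotal, MemFree, ...
        echo "$val"                           # e.g. 1024 for HugePages_Total
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
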
00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 
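The snapshots in this pass are internally consistent: 'HugePages_Total: 1024' pages of 'Hugepagesize: 2048 kB' account for exactly the 'Hugetlb: 2097152 kB' shown (1024 x 2048 kB = 2097152 kB, i.e. 2 GiB), and HugePages_Free equals HugePages_Total, so none of the pool is in use yet. The same cross-check as an illustrative one-liner (it assumes only the default 2 MB page size is in use, as on this node):

# Cross-check HugePages_Total x Hugepagesize against the Hugetlb total
# (values in this log: 1024 x 2048 kB = 2097152 kB).
awk '/^HugePages_Total:/ {n=$2}
     /^Hugepagesize:/    {sz=$2}
     /^Hugetlb:/         {t=$2}
     END {printf "computed=%d kB, reported=%d kB\n", n*sz, t}' /proc/meminfo
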
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.869 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295208 kB' 'MemFree: 39987404 kB' 'MemAvailable: 44659060 kB' 'Buffers: 2696 kB' 'Cached: 
14288508 kB' 'SwapCached: 0 kB' 'Active: 10344044 kB' 'Inactive: 4455220 kB' 'Active(anon): 9777592 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 511356 kB' 'Mapped: 206268 kB' 'Shmem: 9269532 kB' 'KReclaimable: 294248 kB' 'Slab: 930052 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 635804 kB' 'KernelStack: 22064 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11126680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216520 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.870 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.871 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
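[Editor's note] The xtrace lines above are setup/common.sh's get_meminfo helper walking every /proc/meminfo key and hitting `continue` until it reaches the requested field (here HugePages_Rsvd). A minimal sketch of that scan pattern, reconstructed from the trace and simplified (the function name, direct file read, and error handling below are assumptions, not the project's exact helper, which reads the file into an array first):

    #!/usr/bin/env bash
    # Sketch: return the value of one /proc/meminfo field, mirroring the
    # IFS=': ' read loop visible in the trace above (simplified).
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other key, as traced
            echo "$val"                        # value in kB, or a bare page count
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Rsvd   # prints 0 on the node traced above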
00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:30.872 nr_hugepages=1024 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.872 resv_hugepages=0 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.872 surplus_hugepages=0 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.872 anon_hugepages=0 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.872 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
60295208 kB' 'MemFree: 39991436 kB' 'MemAvailable: 44663092 kB' 'Buffers: 2696 kB' 'Cached: 14288544 kB' 'SwapCached: 0 kB' 'Active: 10344788 kB' 'Inactive: 4455220 kB' 'Active(anon): 9778336 kB' 'Inactive(anon): 0 kB' 'Active(file): 566452 kB' 'Inactive(file): 4455220 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 512140 kB' 'Mapped: 206268 kB' 'Shmem: 9269568 kB' 'KReclaimable: 294248 kB' 'Slab: 930052 kB' 'SReclaimable: 294248 kB' 'SUnreclaim: 635804 kB' 'KernelStack: 22080 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487632 kB' 'Committed_AS: 11127200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 216536 kB' 'VmallocChunk: 0 kB' 'Percpu: 99456 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3202420 kB' 'DirectMap2M: 18503680 kB' 'DirectMap1G: 47185920 kB' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.873 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
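[Editor's note] Once the scan returns, hugepages.sh asserts the bookkeeping shown at hugepages.sh@100-@110 above: with nr_hugepages=1024 and both resv_hugepages and surplus_hugepages at 0, the kernel's HugePages_Total must equal nr_hugepages + surp + resv. A hedged sketch of that consistency check (variable names follow the trace; the wrapper function itself is hypothetical):

    #!/usr/bin/env bash
    # Sketch of the accounting check traced above: the global HugePages_Total
    # must cover the requested, reserved and surplus pages.
    check_hugepage_accounting() {
        local nr_hugepages=$1 resv=$2 surp=$3 total
        total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
        (( total == nr_hugepages + surp + resv )) || {
            echo "hugepage accounting mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
            return 1
        }
    }

    check_hugepage_accounting 1024 0 0   # values observed in the trace

The test then points the same scan at /sys/devices/system/node/node0/meminfo (stripping the leading "Node 0 " prefix) so the per-node HugePages_Surp can be folded into nodes_test[], which is what the remaining iterations in this excerpt are doing.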
00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 18369352 kB' 'MemUsed: 14269788 kB' 'SwapCached: 0 kB' 'Active: 6662212 kB' 'Inactive: 4306116 kB' 'Active(anon): 6373148 kB' 'Inactive(anon): 0 kB' 'Active(file): 289064 kB' 'Inactive(file): 4306116 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10656596 kB' 'Mapped: 101488 kB' 'AnonPages: 314888 kB' 'Shmem: 6061416 kB' 'KernelStack: 13688 kB' 'PageTables: 5228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 197232 kB' 'Slab: 511792 kB' 'SReclaimable: 197232 kB' 'SUnreclaim: 314560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.874 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 
01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.875 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.876 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.876 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.876 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:30.876 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.876 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.876 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.876 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.876 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:30.876 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.876 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.876 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.876 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.876 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:30.876 node0=1024 expecting 1024 00:03:30.876 01:06:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:30.876 00:03:30.876 real 0m6.699s 00:03:30.876 user 0m2.487s 00:03:30.876 sys 0m4.272s 00:03:30.876 01:06:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:30.876 01:06:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:30.876 ************************************ 00:03:30.876 END TEST no_shrink_alloc 00:03:30.876 ************************************ 00:03:30.876 01:06:06 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:30.876 01:06:06 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:30.876 01:06:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:30.876 01:06:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.876 01:06:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.876 01:06:06 
setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.876 01:06:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.876 01:06:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:30.876 01:06:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.876 01:06:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.876 01:06:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:30.876 01:06:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:30.876 01:06:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:30.876 01:06:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:30.876 00:03:30.876 real 0m26.381s 00:03:30.876 user 0m9.258s 00:03:30.876 sys 0m15.893s 00:03:30.876 01:06:06 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:30.876 01:06:06 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:30.876 ************************************ 00:03:30.876 END TEST hugepages 00:03:30.876 ************************************ 00:03:30.876 01:06:06 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:30.876 01:06:06 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:30.876 01:06:06 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:30.876 01:06:06 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:30.876 ************************************ 00:03:30.876 START TEST driver 00:03:30.876 ************************************ 00:03:30.876 01:06:06 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:31.135 * Looking for test storage... 
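The long run of "[[ <field> == HugePages_Surp ]] / continue" entries above is setup/common.sh scanning a node's meminfo field by field until it reaches the requested key and echoing its value, and the clear_hp loop that follows simply zeroes every per-node hugepage pool before the driver tests start. A minimal sketch of both steps, assuming the sysfs layout the trace references; the helper names here are illustrative, not SPDK's exact functions:

  # Print one meminfo field for a node, skipping every other key; this is the
  # pattern behind the repeated "[[ ... == HugePages_Surp ]] / continue" lines.
  get_meminfo_field() {
      local want=$1 node=${2:-0} var val _
      local mem_f=/proc/meminfo
      [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS=': ' read -r var val _; do
          [[ $var == "$want" ]] || continue
          echo "${val:-0}"
          return 0
      done < <(sed 's/^Node [0-9]* *//' "$mem_f")   # node meminfo lines carry a "Node N" prefix
      echo 0
  }

  # Release every per-node hugepage pool, mirroring the clear_hp loop traced
  # above (needs root; writes 0 into each nr_hugepages file).
  clear_hugepages() {
      local node hp
      for node in /sys/devices/system/node/node*; do
          for hp in "$node"/hugepages/hugepages-*; do
              echo 0 > "$hp/nr_hugepages"
          done
      done
      export CLEAR_HUGE=yes
  }

For example, get_meminfo_field HugePages_Surp 0 returns node 0's surplus count, which the no_shrink_alloc check above folds into its per-node totals before asserting "node0=1024 expecting 1024".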
00:03:31.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:31.135 01:06:06 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:31.135 01:06:06 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:31.135 01:06:06 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.410 01:06:11 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:36.410 01:06:11 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:36.410 01:06:11 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:36.410 01:06:11 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:36.410 ************************************ 00:03:36.410 START TEST guess_driver 00:03:36.410 ************************************ 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:36.410 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:36.410 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:36.410 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:36.410 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:36.410 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:36.410 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:36.410 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:36.410 01:06:11 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:36.410 Looking for driver=vfio-pci 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.410 01:06:11 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.980 01:06:14 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.980 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.981 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.981 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.981 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.981 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.981 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.981 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.981 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.981 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.981 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:38.981 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:38.981 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:38.981 01:06:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.888 01:06:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:40.888 01:06:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:40.888 01:06:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:40.888 01:06:16 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:40.888 01:06:16 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:40.888 01:06:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.888 01:06:16 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.084 00:03:45.084 real 0m9.489s 00:03:45.084 user 0m2.423s 00:03:45.084 sys 0m4.715s 00:03:45.084 01:06:20 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:45.084 01:06:20 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:45.084 ************************************ 00:03:45.084 END TEST guess_driver 00:03:45.084 ************************************ 00:03:45.084 00:03:45.084 real 0m14.098s 00:03:45.084 user 0m3.608s 00:03:45.084 sys 0m7.247s 00:03:45.084 01:06:20 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:45.084 
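The guess_driver pass above picks vfio-pci because IOMMU groups exist (176 of them) and modprobe --show-depends can resolve vfio_pci and its whole insmod chain; the repeated "[[ -> == -> ]] / [[ vfio-pci == vfio-pci ]]" pairs afterwards are the test reading "setup.sh config" output and confirming every device line reports the guessed driver, bumping fail on any mismatch. A compact sketch of that decision, with illustrative names; the uio_pci_generic fallback is an assumption about the usual alternative and is not exercised in this trace:

  # Decide which userspace PCI driver setup.sh should bind, following the
  # checks visible in the guess_driver trace above.
  pick_userspace_driver() {
      local groups=(/sys/kernel/iommu_groups/*)
      if [[ -e ${groups[0]} ]] && modprobe --show-depends vfio_pci 2> /dev/null | grep -q '\.ko'; then
          echo vfio-pci            # IOMMU present and vfio_pci resolvable
      elif modprobe --show-depends uio_pci_generic &> /dev/null; then
          echo uio_pci_generic     # assumed fallback when vfio is unavailable
      else
          echo 'No valid driver found' >&2
          return 1
      fi
  }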
01:06:20 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:45.084 ************************************ 00:03:45.084 END TEST driver 00:03:45.084 ************************************ 00:03:45.084 01:06:20 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:45.084 01:06:20 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:45.084 01:06:20 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:45.084 01:06:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:45.084 ************************************ 00:03:45.084 START TEST devices 00:03:45.084 ************************************ 00:03:45.084 01:06:20 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:45.343 * Looking for test storage... 00:03:45.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:45.343 01:06:20 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:45.343 01:06:20 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:45.343 01:06:20 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:45.343 01:06:20 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.542 01:06:24 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:49.542 01:06:24 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:49.542 01:06:24 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:49.542 01:06:24 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:49.542 01:06:24 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:49.542 01:06:24 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:49.542 01:06:24 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:49.542 01:06:24 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:49.542 01:06:24 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:49.542 01:06:24 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:49.542 01:06:24 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:49.542 01:06:24 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:49.542 01:06:24 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:49.542 01:06:24 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:49.542 01:06:24 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:49.542 01:06:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:49.542 01:06:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:49.543 01:06:24 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:03:49.543 01:06:24 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:49.543 01:06:24 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:49.543 01:06:24 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:49.543 01:06:24 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:49.543 No valid GPT data, 
bailing 00:03:49.543 01:06:24 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:49.543 01:06:24 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:49.543 01:06:24 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:49.543 01:06:24 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:49.543 01:06:24 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:49.543 01:06:24 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:49.543 01:06:24 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:03:49.543 01:06:24 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:03:49.543 01:06:24 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:49.543 01:06:24 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:03:49.543 01:06:24 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:49.543 01:06:24 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:49.543 01:06:24 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:49.543 01:06:24 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:49.543 01:06:24 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:49.543 01:06:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:49.543 ************************************ 00:03:49.543 START TEST nvme_mount 00:03:49.543 ************************************ 00:03:49.543 01:06:24 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:03:49.543 01:06:24 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:49.543 01:06:24 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:49.543 01:06:24 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:49.543 01:06:24 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:49.543 01:06:24 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:49.543 01:06:24 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:49.543 01:06:24 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:49.543 01:06:24 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:49.543 01:06:24 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:49.543 01:06:24 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:49.543 01:06:24 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:49.543 01:06:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:49.543 01:06:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.543 01:06:24 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:49.543 01:06:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:49.543 01:06:24 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:49.543 01:06:24 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:49.543 01:06:24 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:49.543 01:06:24 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:50.112 Creating new GPT entries in memory. 00:03:50.112 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:50.112 other utilities. 00:03:50.112 01:06:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:50.112 01:06:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:50.112 01:06:25 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:50.112 01:06:25 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:50.112 01:06:25 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:51.050 Creating new GPT entries in memory. 00:03:51.050 The operation has completed successfully. 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3895305 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.050 01:06:26 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:54.339 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:54.339 01:06:29 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:54.598 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:54.598 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:03:54.598 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:54.598 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:54.598 01:06:30 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:54.598 01:06:30 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:54.598 
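The nvme_mount test above runs the same cycle twice, first against the partition and then against the whole namespace: format, mount, drop a marker file, confirm via "setup.sh config" that the in-use device is not rebound, then unmount and wipe (the "bytes were erased" wipefs lines). A sketch of one such cycle, assuming illustrative paths rather than the exact setup/devices.sh helpers:

  cycle_nvme_mount() {
      local dev=$1 mnt=$2
      mkdir -p "$mnt"
      mkfs.ext4 -qF "$dev"
      mount "$dev" "$mnt"
      : > "$mnt/test_nvme"      # marker file the verify step checks for
      # verify step: "setup.sh config" must report the device as active
      # (mounted), i.e. refuse to bind it to vfio-pci while in use
      rm "$mnt/test_nvme"
      umount "$mnt"
      wipefs --all "$dev"       # produces the "bytes were erased" output seen above
  }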
01:06:30 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.598 01:06:30 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:54.598 01:06:30 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:54.598 01:06:30 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.598 01:06:30 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.598 01:06:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:54.598 01:06:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:54.598 01:06:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:54.598 01:06:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:54.598 01:06:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:54.598 01:06:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:54.598 01:06:30 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:54.598 01:06:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:54.598 01:06:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:54.598 01:06:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:54.598 01:06:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:54.598 01:06:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.598 01:06:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:57.888 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.888 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.888 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 
== \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:57.889 01:06:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.889 01:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:57.889 01:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:57.889 01:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.889 01:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:57.889 01:06:33 
setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:57.889 01:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.889 01:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:03:57.889 01:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:03:57.889 01:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:57.889 01:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:57.889 01:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:57.889 01:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:57.889 01:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:57.889 01:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:57.889 01:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.889 01:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:03:57.889 01:06:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:57.889 01:06:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.889 01:06:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:00.425 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:00.425 00:04:00.425 real 0m11.318s 00:04:00.425 user 0m3.019s 00:04:00.425 sys 0m6.079s 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:00.425 01:06:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:00.425 ************************************ 00:04:00.425 END TEST nvme_mount 00:04:00.425 
************************************ 00:04:00.425 01:06:35 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:00.425 01:06:35 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:00.425 01:06:35 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:00.425 01:06:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:00.425 ************************************ 00:04:00.425 START TEST dm_mount 00:04:00.425 ************************************ 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:00.425 01:06:35 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:01.362 Creating new GPT entries in memory. 00:04:01.362 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:01.362 other utilities. 00:04:01.362 01:06:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:01.362 01:06:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:01.362 01:06:36 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:01.362 01:06:36 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:01.362 01:06:36 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:02.299 Creating new GPT entries in memory. 00:04:02.299 The operation has completed successfully. 
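The dm_mount test builds two roughly 1 GiB partitions with sgdisk, serializing each call with flock on the disk and pairing it with scripts/sync_dev_uevents.sh so the partition uevents are observed before the next step touches the device. The layout, with offsets taken from the sgdisk calls in the trace (the device name is the one used on this machine):

  disk=/dev/nvme0n1
  sgdisk "$disk" --zap-all
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199      # p1: sectors 2048..2099199, ~1 GiB
  flock "$disk" sgdisk "$disk" --new=2:2099200:4196351   # p2: sectors 2099200..4196351, ~1 GiB
  # the trace waits for the block/partition uevents (nvme0n1p1, nvme0n1p2)
  # via scripts/sync_dev_uevents.sh before dmsetup consumes the partitions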
00:04:02.299 01:06:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:02.299 01:06:37 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:02.299 01:06:37 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:02.299 01:06:37 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:02.299 01:06:37 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:03.702 The operation has completed successfully. 00:04:03.702 01:06:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:03.702 01:06:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:03.702 01:06:38 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3899466 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.702 01:06:39 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:06.994 
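Each verify pass above restricts scripts/setup.sh to the one NVMe controller via PCI_ALLOWED and then reads its "config" output line by line: a device whose namespaces are mounted or held by device-mapper must show up as "Active devices: ..., so not binding PCI dev" rather than being handed to vfio-pci. A sketch of that check; PCI_ALLOWED and setup.sh are SPDK's, while the parsing below is an illustrative reconstruction, not the exact devices.sh code:

  verify_device_held() {
      local bdf=$1 expected=$2 pci status found=0
      while read -r pci _ _ status; do
          [[ $pci == "$bdf" && $status == *"Active devices:"*"$expected"* ]] && found=1
      done < <(PCI_ALLOWED="$bdf" ./scripts/setup.sh config)
      ((found == 1))
  }

Called as, for example, verify_device_held 0000:d8:00.0 'nvme0n1:nvme_dm_test' for the dm case above; the BDF is specific to this test machine.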
01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.994 01:06:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.530 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.790 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:09.790 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:09.790 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:09.790 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:09.790 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:09.790 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:09.790 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:09.790 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:09.790 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:09.790 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:09.790 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:09.790 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:09.790 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:09.790 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:09.790 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:09.790 01:06:45 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:09.790 00:04:09.790 real 0m9.528s 00:04:09.790 user 0m2.165s 00:04:09.790 sys 0m4.389s 00:04:09.790 01:06:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:09.790 01:06:45 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:09.790 ************************************ 00:04:09.790 END TEST dm_mount 00:04:09.790 ************************************ 00:04:10.050 01:06:45 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:10.050 01:06:45 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:10.050 01:06:45 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.050 01:06:45 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.050 
01:06:45 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:10.050 01:06:45 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:10.050 01:06:45 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:10.308 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:10.308 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:10.309 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:10.309 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:10.309 01:06:45 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:10.309 01:06:45 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:10.309 01:06:45 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:10.309 01:06:45 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.309 01:06:45 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:10.309 01:06:45 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:10.309 01:06:45 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:10.309 00:04:10.309 real 0m25.091s 00:04:10.309 user 0m6.610s 00:04:10.309 sys 0m13.199s 00:04:10.309 01:06:45 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:10.309 01:06:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:10.309 ************************************ 00:04:10.309 END TEST devices 00:04:10.309 ************************************ 00:04:10.309 00:04:10.309 real 1m28.768s 00:04:10.309 user 0m26.399s 00:04:10.309 sys 0m50.462s 00:04:10.309 01:06:45 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:10.309 01:06:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:10.309 ************************************ 00:04:10.309 END TEST setup.sh 00:04:10.309 ************************************ 00:04:10.309 01:06:45 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:13.601 Hugepages 00:04:13.601 node hugesize free / total 00:04:13.601 node0 1048576kB 0 / 0 00:04:13.601 node0 2048kB 2048 / 2048 00:04:13.601 node1 1048576kB 0 / 0 00:04:13.601 node1 2048kB 0 / 0 00:04:13.601 00:04:13.601 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:13.601 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:13.601 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:13.601 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:13.601 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:13.601 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:13.601 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:13.601 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:13.601 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:13.601 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:13.601 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:13.601 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:13.601 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:13.601 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:13.601 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:13.601 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:13.601 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:13.601 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:13.601 01:06:49 -- spdk/autotest.sh@130 -- # uname -s 00:04:13.601 
01:06:49 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:13.601 01:06:49 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:13.601 01:06:49 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:16.889 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:16.889 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:16.889 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:16.889 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:16.889 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:16.889 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:16.889 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:16.889 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:16.889 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:16.889 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:16.889 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:16.889 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:16.889 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:16.889 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:16.889 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:16.889 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:18.794 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:18.794 01:06:54 -- common/autotest_common.sh@1528 -- # sleep 1 00:04:19.732 01:06:55 -- common/autotest_common.sh@1529 -- # bdfs=() 00:04:19.732 01:06:55 -- common/autotest_common.sh@1529 -- # local bdfs 00:04:19.732 01:06:55 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:04:19.732 01:06:55 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:04:19.732 01:06:55 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:19.732 01:06:55 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:19.732 01:06:55 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:19.732 01:06:55 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:19.732 01:06:55 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:19.732 01:06:55 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:19.732 01:06:55 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:04:19.732 01:06:55 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.020 Waiting for block devices as requested 00:04:23.020 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:23.020 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:23.020 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:23.020 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:23.020 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:23.020 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:23.020 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:23.020 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:23.020 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:23.279 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:23.279 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:23.279 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:23.538 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:23.538 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:23.538 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:23.797 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:23.797 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:04:24.056 01:06:59 -- common/autotest_common.sh@1534 -- # for bdf in 
"${bdfs[@]}" 00:04:24.056 01:06:59 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:04:24.056 01:06:59 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:04:24.056 01:06:59 -- common/autotest_common.sh@1498 -- # grep 0000:d8:00.0/nvme/nvme 00:04:24.056 01:06:59 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:24.056 01:06:59 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:04:24.056 01:06:59 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:04:24.056 01:06:59 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:04:24.056 01:06:59 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:04:24.056 01:06:59 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:04:24.056 01:06:59 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:04:24.056 01:06:59 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:24.056 01:06:59 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:24.056 01:06:59 -- common/autotest_common.sh@1541 -- # oacs=' 0xe' 00:04:24.056 01:06:59 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:24.056 01:06:59 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:24.056 01:06:59 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:04:24.056 01:06:59 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:24.056 01:06:59 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:24.056 01:06:59 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:24.056 01:06:59 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:24.056 01:06:59 -- common/autotest_common.sh@1553 -- # continue 00:04:24.056 01:06:59 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:24.056 01:06:59 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:24.056 01:06:59 -- common/autotest_common.sh@10 -- # set +x 00:04:24.056 01:06:59 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:24.056 01:06:59 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:24.056 01:06:59 -- common/autotest_common.sh@10 -- # set +x 00:04:24.056 01:06:59 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:27.405 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:27.405 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:27.405 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:27.405 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:27.405 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:27.405 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:27.405 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:27.405 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:27.405 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:27.405 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:27.405 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:27.405 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:27.405 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:27.405 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:27.405 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:27.405 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:29.311 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:29.311 01:07:04 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:29.311 01:07:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.311 
01:07:04 -- common/autotest_common.sh@10 -- # set +x 00:04:29.311 01:07:04 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:29.311 01:07:04 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:04:29.311 01:07:04 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:04:29.311 01:07:04 -- common/autotest_common.sh@1573 -- # bdfs=() 00:04:29.311 01:07:04 -- common/autotest_common.sh@1573 -- # local bdfs 00:04:29.311 01:07:04 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:04:29.311 01:07:04 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:29.311 01:07:04 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:29.311 01:07:04 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:29.311 01:07:04 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:29.311 01:07:04 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:29.311 01:07:04 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:04:29.311 01:07:04 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:04:29.311 01:07:04 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:29.311 01:07:04 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:04:29.311 01:07:04 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:04:29.311 01:07:04 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:29.311 01:07:04 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:04:29.311 01:07:04 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:d8:00.0 00:04:29.311 01:07:04 -- common/autotest_common.sh@1588 -- # [[ -z 0000:d8:00.0 ]] 00:04:29.311 01:07:04 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=3908987 00:04:29.311 01:07:04 -- common/autotest_common.sh@1594 -- # waitforlisten 3908987 00:04:29.311 01:07:04 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.311 01:07:04 -- common/autotest_common.sh@827 -- # '[' -z 3908987 ']' 00:04:29.311 01:07:04 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.311 01:07:04 -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:29.311 01:07:04 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.311 01:07:04 -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:29.311 01:07:04 -- common/autotest_common.sh@10 -- # set +x 00:04:29.311 [2024-05-15 01:07:04.847106] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:04:29.311 [2024-05-15 01:07:04.847158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3908987 ] 00:04:29.311 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.311 [2024-05-15 01:07:04.917625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.311 [2024-05-15 01:07:04.988040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.246 01:07:05 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:30.246 01:07:05 -- common/autotest_common.sh@860 -- # return 0 00:04:30.246 01:07:05 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:04:30.246 01:07:05 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:04:30.246 01:07:05 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:04:33.537 nvme0n1 00:04:33.537 01:07:08 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:33.537 [2024-05-15 01:07:08.792037] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:33.537 request: 00:04:33.537 { 00:04:33.537 "nvme_ctrlr_name": "nvme0", 00:04:33.537 "password": "test", 00:04:33.537 "method": "bdev_nvme_opal_revert", 00:04:33.537 "req_id": 1 00:04:33.537 } 00:04:33.537 Got JSON-RPC error response 00:04:33.537 response: 00:04:33.537 { 00:04:33.537 "code": -32602, 00:04:33.537 "message": "Invalid parameters" 00:04:33.537 } 00:04:33.537 01:07:08 -- common/autotest_common.sh@1600 -- # true 00:04:33.537 01:07:08 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:04:33.537 01:07:08 -- common/autotest_common.sh@1604 -- # killprocess 3908987 00:04:33.537 01:07:08 -- common/autotest_common.sh@946 -- # '[' -z 3908987 ']' 00:04:33.537 01:07:08 -- common/autotest_common.sh@950 -- # kill -0 3908987 00:04:33.537 01:07:08 -- common/autotest_common.sh@951 -- # uname 00:04:33.537 01:07:08 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:33.537 01:07:08 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3908987 00:04:33.537 01:07:08 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:33.537 01:07:08 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:33.537 01:07:08 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3908987' 00:04:33.537 killing process with pid 3908987 00:04:33.537 01:07:08 -- common/autotest_common.sh@965 -- # kill 3908987 00:04:33.537 01:07:08 -- common/autotest_common.sh@970 -- # wait 3908987 00:04:35.442 01:07:11 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:35.442 01:07:11 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:35.442 01:07:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:35.442 01:07:11 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:35.442 01:07:11 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:35.442 01:07:11 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:35.442 01:07:11 -- common/autotest_common.sh@10 -- # set +x 00:04:35.442 01:07:11 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:35.442 01:07:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:35.442 01:07:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 
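The opal_revert_cleanup step traced above reduces to a short RPC sequence against a freshly started target. A minimal standalone sketch, assuming the same workspace path and NVMe address (0000:d8:00.0) as this run; both are host-specific, and the fixed sleep stands in for the harness's waitforlisten helper:

#!/usr/bin/env bash
# Sketch of the opal revert flow shown in the trace above; not the harness itself.
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$rootdir/build/bin/spdk_tgt" &      # start the SPDK target
spdk_tgt_pid=$!
sleep 2                              # crude stand-in for the waitforlisten helper

# Attach the controller reported by get_nvme_bdfs_by_id 0x0a54 as bdev "nvme0"
"$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0

# Attempt the Opal revert; on a drive without Opal support this returns the
# JSON-RPC error seen above (-32602, "nvme0 not support opal"), which is tolerated.
"$rootdir/scripts/rpc.py" bdev_nvme_opal_revert -b nvme0 -p test || true

kill "$spdk_tgt_pid"
wait "$spdk_tgt_pid" 2>/dev/null
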
00:04:35.442 01:07:11 -- common/autotest_common.sh@10 -- # set +x 00:04:35.442 ************************************ 00:04:35.442 START TEST env 00:04:35.442 ************************************ 00:04:35.442 01:07:11 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:35.701 * Looking for test storage... 00:04:35.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:35.701 01:07:11 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:35.701 01:07:11 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:35.701 01:07:11 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.701 01:07:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.701 ************************************ 00:04:35.701 START TEST env_memory 00:04:35.701 ************************************ 00:04:35.701 01:07:11 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:35.701 00:04:35.701 00:04:35.701 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.701 http://cunit.sourceforge.net/ 00:04:35.701 00:04:35.701 00:04:35.701 Suite: memory 00:04:35.701 Test: alloc and free memory map ...[2024-05-15 01:07:11.329492] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:35.701 passed 00:04:35.701 Test: mem map translation ...[2024-05-15 01:07:11.348870] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:35.701 [2024-05-15 01:07:11.348883] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:35.701 [2024-05-15 01:07:11.348934] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:35.701 [2024-05-15 01:07:11.348943] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:35.701 passed 00:04:35.701 Test: mem map registration ...[2024-05-15 01:07:11.385115] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:35.701 [2024-05-15 01:07:11.385130] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:35.960 passed 00:04:35.960 Test: mem map adjacent registrations ...passed 00:04:35.960 00:04:35.960 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.960 suites 1 1 n/a 0 0 00:04:35.960 tests 4 4 4 0 0 00:04:35.960 asserts 152 152 152 0 n/a 00:04:35.960 00:04:35.960 Elapsed time = 0.138 seconds 00:04:35.960 00:04:35.960 real 0m0.153s 00:04:35.960 user 0m0.141s 00:04:35.960 sys 0m0.011s 00:04:35.960 01:07:11 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:35.960 01:07:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:35.960 ************************************ 00:04:35.960 END TEST 
env_memory 00:04:35.960 ************************************ 00:04:35.960 01:07:11 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:35.960 01:07:11 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:35.960 01:07:11 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.960 01:07:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.960 ************************************ 00:04:35.960 START TEST env_vtophys 00:04:35.960 ************************************ 00:04:35.960 01:07:11 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:35.960 EAL: lib.eal log level changed from notice to debug 00:04:35.960 EAL: Detected lcore 0 as core 0 on socket 0 00:04:35.960 EAL: Detected lcore 1 as core 1 on socket 0 00:04:35.960 EAL: Detected lcore 2 as core 2 on socket 0 00:04:35.960 EAL: Detected lcore 3 as core 3 on socket 0 00:04:35.960 EAL: Detected lcore 4 as core 4 on socket 0 00:04:35.960 EAL: Detected lcore 5 as core 5 on socket 0 00:04:35.960 EAL: Detected lcore 6 as core 6 on socket 0 00:04:35.960 EAL: Detected lcore 7 as core 8 on socket 0 00:04:35.960 EAL: Detected lcore 8 as core 9 on socket 0 00:04:35.960 EAL: Detected lcore 9 as core 10 on socket 0 00:04:35.960 EAL: Detected lcore 10 as core 11 on socket 0 00:04:35.960 EAL: Detected lcore 11 as core 12 on socket 0 00:04:35.960 EAL: Detected lcore 12 as core 13 on socket 0 00:04:35.960 EAL: Detected lcore 13 as core 14 on socket 0 00:04:35.960 EAL: Detected lcore 14 as core 16 on socket 0 00:04:35.960 EAL: Detected lcore 15 as core 17 on socket 0 00:04:35.960 EAL: Detected lcore 16 as core 18 on socket 0 00:04:35.960 EAL: Detected lcore 17 as core 19 on socket 0 00:04:35.960 EAL: Detected lcore 18 as core 20 on socket 0 00:04:35.960 EAL: Detected lcore 19 as core 21 on socket 0 00:04:35.960 EAL: Detected lcore 20 as core 22 on socket 0 00:04:35.960 EAL: Detected lcore 21 as core 24 on socket 0 00:04:35.960 EAL: Detected lcore 22 as core 25 on socket 0 00:04:35.960 EAL: Detected lcore 23 as core 26 on socket 0 00:04:35.960 EAL: Detected lcore 24 as core 27 on socket 0 00:04:35.960 EAL: Detected lcore 25 as core 28 on socket 0 00:04:35.960 EAL: Detected lcore 26 as core 29 on socket 0 00:04:35.960 EAL: Detected lcore 27 as core 30 on socket 0 00:04:35.960 EAL: Detected lcore 28 as core 0 on socket 1 00:04:35.960 EAL: Detected lcore 29 as core 1 on socket 1 00:04:35.960 EAL: Detected lcore 30 as core 2 on socket 1 00:04:35.960 EAL: Detected lcore 31 as core 3 on socket 1 00:04:35.960 EAL: Detected lcore 32 as core 4 on socket 1 00:04:35.960 EAL: Detected lcore 33 as core 5 on socket 1 00:04:35.960 EAL: Detected lcore 34 as core 6 on socket 1 00:04:35.960 EAL: Detected lcore 35 as core 8 on socket 1 00:04:35.960 EAL: Detected lcore 36 as core 9 on socket 1 00:04:35.960 EAL: Detected lcore 37 as core 10 on socket 1 00:04:35.960 EAL: Detected lcore 38 as core 11 on socket 1 00:04:35.960 EAL: Detected lcore 39 as core 12 on socket 1 00:04:35.960 EAL: Detected lcore 40 as core 13 on socket 1 00:04:35.960 EAL: Detected lcore 41 as core 14 on socket 1 00:04:35.960 EAL: Detected lcore 42 as core 16 on socket 1 00:04:35.960 EAL: Detected lcore 43 as core 17 on socket 1 00:04:35.960 EAL: Detected lcore 44 as core 18 on socket 1 00:04:35.961 EAL: Detected lcore 45 as core 19 on socket 1 00:04:35.961 EAL: Detected lcore 46 as core 20 on socket 1 00:04:35.961 EAL: 
Detected lcore 47 as core 21 on socket 1 00:04:35.961 EAL: Detected lcore 48 as core 22 on socket 1 00:04:35.961 EAL: Detected lcore 49 as core 24 on socket 1 00:04:35.961 EAL: Detected lcore 50 as core 25 on socket 1 00:04:35.961 EAL: Detected lcore 51 as core 26 on socket 1 00:04:35.961 EAL: Detected lcore 52 as core 27 on socket 1 00:04:35.961 EAL: Detected lcore 53 as core 28 on socket 1 00:04:35.961 EAL: Detected lcore 54 as core 29 on socket 1 00:04:35.961 EAL: Detected lcore 55 as core 30 on socket 1 00:04:35.961 EAL: Detected lcore 56 as core 0 on socket 0 00:04:35.961 EAL: Detected lcore 57 as core 1 on socket 0 00:04:35.961 EAL: Detected lcore 58 as core 2 on socket 0 00:04:35.961 EAL: Detected lcore 59 as core 3 on socket 0 00:04:35.961 EAL: Detected lcore 60 as core 4 on socket 0 00:04:35.961 EAL: Detected lcore 61 as core 5 on socket 0 00:04:35.961 EAL: Detected lcore 62 as core 6 on socket 0 00:04:35.961 EAL: Detected lcore 63 as core 8 on socket 0 00:04:35.961 EAL: Detected lcore 64 as core 9 on socket 0 00:04:35.961 EAL: Detected lcore 65 as core 10 on socket 0 00:04:35.961 EAL: Detected lcore 66 as core 11 on socket 0 00:04:35.961 EAL: Detected lcore 67 as core 12 on socket 0 00:04:35.961 EAL: Detected lcore 68 as core 13 on socket 0 00:04:35.961 EAL: Detected lcore 69 as core 14 on socket 0 00:04:35.961 EAL: Detected lcore 70 as core 16 on socket 0 00:04:35.961 EAL: Detected lcore 71 as core 17 on socket 0 00:04:35.961 EAL: Detected lcore 72 as core 18 on socket 0 00:04:35.961 EAL: Detected lcore 73 as core 19 on socket 0 00:04:35.961 EAL: Detected lcore 74 as core 20 on socket 0 00:04:35.961 EAL: Detected lcore 75 as core 21 on socket 0 00:04:35.961 EAL: Detected lcore 76 as core 22 on socket 0 00:04:35.961 EAL: Detected lcore 77 as core 24 on socket 0 00:04:35.961 EAL: Detected lcore 78 as core 25 on socket 0 00:04:35.961 EAL: Detected lcore 79 as core 26 on socket 0 00:04:35.961 EAL: Detected lcore 80 as core 27 on socket 0 00:04:35.961 EAL: Detected lcore 81 as core 28 on socket 0 00:04:35.961 EAL: Detected lcore 82 as core 29 on socket 0 00:04:35.961 EAL: Detected lcore 83 as core 30 on socket 0 00:04:35.961 EAL: Detected lcore 84 as core 0 on socket 1 00:04:35.961 EAL: Detected lcore 85 as core 1 on socket 1 00:04:35.961 EAL: Detected lcore 86 as core 2 on socket 1 00:04:35.961 EAL: Detected lcore 87 as core 3 on socket 1 00:04:35.961 EAL: Detected lcore 88 as core 4 on socket 1 00:04:35.961 EAL: Detected lcore 89 as core 5 on socket 1 00:04:35.961 EAL: Detected lcore 90 as core 6 on socket 1 00:04:35.961 EAL: Detected lcore 91 as core 8 on socket 1 00:04:35.961 EAL: Detected lcore 92 as core 9 on socket 1 00:04:35.961 EAL: Detected lcore 93 as core 10 on socket 1 00:04:35.961 EAL: Detected lcore 94 as core 11 on socket 1 00:04:35.961 EAL: Detected lcore 95 as core 12 on socket 1 00:04:35.961 EAL: Detected lcore 96 as core 13 on socket 1 00:04:35.961 EAL: Detected lcore 97 as core 14 on socket 1 00:04:35.961 EAL: Detected lcore 98 as core 16 on socket 1 00:04:35.961 EAL: Detected lcore 99 as core 17 on socket 1 00:04:35.961 EAL: Detected lcore 100 as core 18 on socket 1 00:04:35.961 EAL: Detected lcore 101 as core 19 on socket 1 00:04:35.961 EAL: Detected lcore 102 as core 20 on socket 1 00:04:35.961 EAL: Detected lcore 103 as core 21 on socket 1 00:04:35.961 EAL: Detected lcore 104 as core 22 on socket 1 00:04:35.961 EAL: Detected lcore 105 as core 24 on socket 1 00:04:35.961 EAL: Detected lcore 106 as core 25 on socket 1 00:04:35.961 EAL: Detected lcore 107 as 
core 26 on socket 1 00:04:35.961 EAL: Detected lcore 108 as core 27 on socket 1 00:04:35.961 EAL: Detected lcore 109 as core 28 on socket 1 00:04:35.961 EAL: Detected lcore 110 as core 29 on socket 1 00:04:35.961 EAL: Detected lcore 111 as core 30 on socket 1 00:04:35.961 EAL: Maximum logical cores by configuration: 128 00:04:35.961 EAL: Detected CPU lcores: 112 00:04:35.961 EAL: Detected NUMA nodes: 2 00:04:35.961 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:35.961 EAL: Detected shared linkage of DPDK 00:04:35.961 EAL: No shared files mode enabled, IPC will be disabled 00:04:35.961 EAL: Bus pci wants IOVA as 'DC' 00:04:35.961 EAL: Buses did not request a specific IOVA mode. 00:04:35.961 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:35.961 EAL: Selected IOVA mode 'VA' 00:04:35.961 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.961 EAL: Probing VFIO support... 00:04:35.961 EAL: IOMMU type 1 (Type 1) is supported 00:04:35.961 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:35.961 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:35.961 EAL: VFIO support initialized 00:04:35.961 EAL: Ask a virtual area of 0x2e000 bytes 00:04:35.961 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:35.961 EAL: Setting up physically contiguous memory... 00:04:35.961 EAL: Setting maximum number of open files to 524288 00:04:35.961 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:35.961 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:35.961 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:35.961 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.961 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:35.961 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.961 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.961 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:35.961 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:35.961 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.961 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:35.961 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.961 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.961 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:35.961 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:35.961 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.961 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:35.961 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.961 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.961 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:35.961 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:35.961 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.961 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:35.961 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:35.961 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.961 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:35.961 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:35.961 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:35.961 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.961 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:35.961 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:35.961 EAL: Ask 
a virtual area of 0x400000000 bytes 00:04:35.961 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:35.961 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:35.961 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.961 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:35.961 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:35.961 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.961 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:35.961 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:35.961 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.961 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:35.961 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:35.961 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.961 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:35.961 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:35.961 EAL: Ask a virtual area of 0x61000 bytes 00:04:35.961 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:35.961 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:35.961 EAL: Ask a virtual area of 0x400000000 bytes 00:04:35.961 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:35.961 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:35.961 EAL: Hugepages will be freed exactly as allocated. 00:04:35.961 EAL: No shared files mode enabled, IPC is disabled 00:04:35.961 EAL: No shared files mode enabled, IPC is disabled 00:04:35.961 EAL: TSC frequency is ~2500000 KHz 00:04:35.961 EAL: Main lcore 0 is ready (tid=7f322c269a00;cpuset=[0]) 00:04:35.961 EAL: Trying to obtain current memory policy. 00:04:35.961 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.961 EAL: Restoring previous memory policy: 0 00:04:35.961 EAL: request: mp_malloc_sync 00:04:35.961 EAL: No shared files mode enabled, IPC is disabled 00:04:35.961 EAL: Heap on socket 0 was expanded by 2MB 00:04:35.961 EAL: No shared files mode enabled, IPC is disabled 00:04:35.961 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:35.961 EAL: Mem event callback 'spdk:(nil)' registered 00:04:35.961 00:04:35.961 00:04:35.961 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.961 http://cunit.sourceforge.net/ 00:04:35.961 00:04:35.961 00:04:35.961 Suite: components_suite 00:04:35.961 Test: vtophys_malloc_test ...passed 00:04:35.961 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:35.961 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.961 EAL: Restoring previous memory policy: 4 00:04:35.961 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.961 EAL: request: mp_malloc_sync 00:04:35.961 EAL: No shared files mode enabled, IPC is disabled 00:04:35.961 EAL: Heap on socket 0 was expanded by 4MB 00:04:35.961 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.961 EAL: request: mp_malloc_sync 00:04:35.961 EAL: No shared files mode enabled, IPC is disabled 00:04:35.961 EAL: Heap on socket 0 was shrunk by 4MB 00:04:35.961 EAL: Trying to obtain current memory policy. 
00:04:35.961 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.961 EAL: Restoring previous memory policy: 4 00:04:35.961 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.961 EAL: request: mp_malloc_sync 00:04:35.961 EAL: No shared files mode enabled, IPC is disabled 00:04:35.961 EAL: Heap on socket 0 was expanded by 6MB 00:04:35.961 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.961 EAL: request: mp_malloc_sync 00:04:35.961 EAL: No shared files mode enabled, IPC is disabled 00:04:35.961 EAL: Heap on socket 0 was shrunk by 6MB 00:04:35.961 EAL: Trying to obtain current memory policy. 00:04:35.961 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.961 EAL: Restoring previous memory policy: 4 00:04:35.961 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.961 EAL: request: mp_malloc_sync 00:04:35.961 EAL: No shared files mode enabled, IPC is disabled 00:04:35.961 EAL: Heap on socket 0 was expanded by 10MB 00:04:35.961 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.961 EAL: request: mp_malloc_sync 00:04:35.961 EAL: No shared files mode enabled, IPC is disabled 00:04:35.961 EAL: Heap on socket 0 was shrunk by 10MB 00:04:35.961 EAL: Trying to obtain current memory policy. 00:04:35.962 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.962 EAL: Restoring previous memory policy: 4 00:04:35.962 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.962 EAL: request: mp_malloc_sync 00:04:35.962 EAL: No shared files mode enabled, IPC is disabled 00:04:35.962 EAL: Heap on socket 0 was expanded by 18MB 00:04:35.962 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.962 EAL: request: mp_malloc_sync 00:04:35.962 EAL: No shared files mode enabled, IPC is disabled 00:04:35.962 EAL: Heap on socket 0 was shrunk by 18MB 00:04:35.962 EAL: Trying to obtain current memory policy. 00:04:35.962 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.962 EAL: Restoring previous memory policy: 4 00:04:35.962 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.962 EAL: request: mp_malloc_sync 00:04:35.962 EAL: No shared files mode enabled, IPC is disabled 00:04:35.962 EAL: Heap on socket 0 was expanded by 34MB 00:04:35.962 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.962 EAL: request: mp_malloc_sync 00:04:35.962 EAL: No shared files mode enabled, IPC is disabled 00:04:35.962 EAL: Heap on socket 0 was shrunk by 34MB 00:04:35.962 EAL: Trying to obtain current memory policy. 00:04:35.962 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.221 EAL: Restoring previous memory policy: 4 00:04:36.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.221 EAL: request: mp_malloc_sync 00:04:36.221 EAL: No shared files mode enabled, IPC is disabled 00:04:36.221 EAL: Heap on socket 0 was expanded by 66MB 00:04:36.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.221 EAL: request: mp_malloc_sync 00:04:36.221 EAL: No shared files mode enabled, IPC is disabled 00:04:36.221 EAL: Heap on socket 0 was shrunk by 66MB 00:04:36.221 EAL: Trying to obtain current memory policy. 
00:04:36.221 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.221 EAL: Restoring previous memory policy: 4 00:04:36.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.221 EAL: request: mp_malloc_sync 00:04:36.221 EAL: No shared files mode enabled, IPC is disabled 00:04:36.221 EAL: Heap on socket 0 was expanded by 130MB 00:04:36.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.221 EAL: request: mp_malloc_sync 00:04:36.221 EAL: No shared files mode enabled, IPC is disabled 00:04:36.221 EAL: Heap on socket 0 was shrunk by 130MB 00:04:36.221 EAL: Trying to obtain current memory policy. 00:04:36.221 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.221 EAL: Restoring previous memory policy: 4 00:04:36.221 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.222 EAL: request: mp_malloc_sync 00:04:36.222 EAL: No shared files mode enabled, IPC is disabled 00:04:36.222 EAL: Heap on socket 0 was expanded by 258MB 00:04:36.222 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.222 EAL: request: mp_malloc_sync 00:04:36.222 EAL: No shared files mode enabled, IPC is disabled 00:04:36.222 EAL: Heap on socket 0 was shrunk by 258MB 00:04:36.222 EAL: Trying to obtain current memory policy. 00:04:36.222 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.481 EAL: Restoring previous memory policy: 4 00:04:36.481 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.481 EAL: request: mp_malloc_sync 00:04:36.481 EAL: No shared files mode enabled, IPC is disabled 00:04:36.481 EAL: Heap on socket 0 was expanded by 514MB 00:04:36.481 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.481 EAL: request: mp_malloc_sync 00:04:36.481 EAL: No shared files mode enabled, IPC is disabled 00:04:36.481 EAL: Heap on socket 0 was shrunk by 514MB 00:04:36.481 EAL: Trying to obtain current memory policy. 
00:04:36.481 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.740 EAL: Restoring previous memory policy: 4 00:04:36.740 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.740 EAL: request: mp_malloc_sync 00:04:36.740 EAL: No shared files mode enabled, IPC is disabled 00:04:36.740 EAL: Heap on socket 0 was expanded by 1026MB 00:04:36.999 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.999 EAL: request: mp_malloc_sync 00:04:36.999 EAL: No shared files mode enabled, IPC is disabled 00:04:36.999 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:36.999 passed 00:04:36.999 00:04:36.999 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.999 suites 1 1 n/a 0 0 00:04:36.999 tests 2 2 2 0 0 00:04:36.999 asserts 497 497 497 0 n/a 00:04:36.999 00:04:36.999 Elapsed time = 0.966 seconds 00:04:36.999 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.999 EAL: request: mp_malloc_sync 00:04:36.999 EAL: No shared files mode enabled, IPC is disabled 00:04:36.999 EAL: Heap on socket 0 was shrunk by 2MB 00:04:36.999 EAL: No shared files mode enabled, IPC is disabled 00:04:36.999 EAL: No shared files mode enabled, IPC is disabled 00:04:36.999 EAL: No shared files mode enabled, IPC is disabled 00:04:36.999 00:04:36.999 real 0m1.105s 00:04:36.999 user 0m0.639s 00:04:36.999 sys 0m0.430s 00:04:36.999 01:07:12 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:36.999 01:07:12 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:36.999 ************************************ 00:04:36.999 END TEST env_vtophys 00:04:36.999 ************************************ 00:04:36.999 01:07:12 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:36.999 01:07:12 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:36.999 01:07:12 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:36.999 01:07:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.259 ************************************ 00:04:37.259 START TEST env_pci 00:04:37.259 ************************************ 00:04:37.259 01:07:12 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:37.259 00:04:37.259 00:04:37.259 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.259 http://cunit.sourceforge.net/ 00:04:37.259 00:04:37.259 00:04:37.259 Suite: pci 00:04:37.259 Test: pci_hook ...[2024-05-15 01:07:12.726598] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3910527 has claimed it 00:04:37.259 EAL: Cannot find device (10000:00:01.0) 00:04:37.259 EAL: Failed to attach device on primary process 00:04:37.259 passed 00:04:37.259 00:04:37.259 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.259 suites 1 1 n/a 0 0 00:04:37.259 tests 1 1 1 0 0 00:04:37.259 asserts 25 25 25 0 n/a 00:04:37.259 00:04:37.259 Elapsed time = 0.033 seconds 00:04:37.259 00:04:37.259 real 0m0.052s 00:04:37.259 user 0m0.015s 00:04:37.259 sys 0m0.037s 00:04:37.259 01:07:12 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:37.259 01:07:12 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:37.259 ************************************ 00:04:37.259 END TEST env_pci 00:04:37.259 ************************************ 00:04:37.259 01:07:12 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:37.259 
01:07:12 env -- env/env.sh@15 -- # uname 00:04:37.259 01:07:12 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:37.259 01:07:12 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:37.259 01:07:12 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:37.259 01:07:12 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:04:37.259 01:07:12 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:37.259 01:07:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.259 ************************************ 00:04:37.259 START TEST env_dpdk_post_init 00:04:37.259 ************************************ 00:04:37.259 01:07:12 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:37.259 EAL: Detected CPU lcores: 112 00:04:37.259 EAL: Detected NUMA nodes: 2 00:04:37.259 EAL: Detected shared linkage of DPDK 00:04:37.259 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:37.259 EAL: Selected IOVA mode 'VA' 00:04:37.259 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.259 EAL: VFIO support initialized 00:04:37.259 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:37.519 EAL: Using IOMMU type 1 (Type 1) 00:04:37.519 EAL: Ignore mapping IO port bar(1) 00:04:37.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:37.519 EAL: Ignore mapping IO port bar(1) 00:04:37.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:37.519 EAL: Ignore mapping IO port bar(1) 00:04:37.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:37.519 EAL: Ignore mapping IO port bar(1) 00:04:37.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:37.519 EAL: Ignore mapping IO port bar(1) 00:04:37.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:37.519 EAL: Ignore mapping IO port bar(1) 00:04:37.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:37.519 EAL: Ignore mapping IO port bar(1) 00:04:37.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:37.519 EAL: Ignore mapping IO port bar(1) 00:04:37.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:37.519 EAL: Ignore mapping IO port bar(1) 00:04:37.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:37.519 EAL: Ignore mapping IO port bar(1) 00:04:37.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:37.519 EAL: Ignore mapping IO port bar(1) 00:04:37.520 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:37.520 EAL: Ignore mapping IO port bar(1) 00:04:37.520 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:37.520 EAL: Ignore mapping IO port bar(1) 00:04:37.520 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:37.520 EAL: Ignore mapping IO port bar(1) 00:04:37.520 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:37.520 EAL: Ignore mapping IO port bar(1) 00:04:37.520 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:37.520 EAL: 
Ignore mapping IO port bar(1) 00:04:37.520 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:38.457 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:04:41.746 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:04:41.746 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:04:42.314 Starting DPDK initialization... 00:04:42.314 Starting SPDK post initialization... 00:04:42.314 SPDK NVMe probe 00:04:42.314 Attaching to 0000:d8:00.0 00:04:42.314 Attached to 0000:d8:00.0 00:04:42.314 Cleaning up... 00:04:42.314 00:04:42.314 real 0m4.933s 00:04:42.315 user 0m3.636s 00:04:42.315 sys 0m0.355s 00:04:42.315 01:07:17 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:42.315 01:07:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:42.315 ************************************ 00:04:42.315 END TEST env_dpdk_post_init 00:04:42.315 ************************************ 00:04:42.315 01:07:17 env -- env/env.sh@26 -- # uname 00:04:42.315 01:07:17 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:42.315 01:07:17 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:42.315 01:07:17 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:42.315 01:07:17 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:42.315 01:07:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.315 ************************************ 00:04:42.315 START TEST env_mem_callbacks 00:04:42.315 ************************************ 00:04:42.315 01:07:17 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:42.315 EAL: Detected CPU lcores: 112 00:04:42.315 EAL: Detected NUMA nodes: 2 00:04:42.315 EAL: Detected shared linkage of DPDK 00:04:42.315 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:42.315 EAL: Selected IOVA mode 'VA' 00:04:42.315 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.315 EAL: VFIO support initialized 00:04:42.315 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:42.315 00:04:42.315 00:04:42.315 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.315 http://cunit.sourceforge.net/ 00:04:42.315 00:04:42.315 00:04:42.315 Suite: memory 00:04:42.315 Test: test ... 
00:04:42.315 register 0x200000200000 2097152 00:04:42.315 malloc 3145728 00:04:42.315 register 0x200000400000 4194304 00:04:42.315 buf 0x200000500000 len 3145728 PASSED 00:04:42.315 malloc 64 00:04:42.315 buf 0x2000004fff40 len 64 PASSED 00:04:42.315 malloc 4194304 00:04:42.315 register 0x200000800000 6291456 00:04:42.315 buf 0x200000a00000 len 4194304 PASSED 00:04:42.315 free 0x200000500000 3145728 00:04:42.315 free 0x2000004fff40 64 00:04:42.315 unregister 0x200000400000 4194304 PASSED 00:04:42.315 free 0x200000a00000 4194304 00:04:42.315 unregister 0x200000800000 6291456 PASSED 00:04:42.315 malloc 8388608 00:04:42.315 register 0x200000400000 10485760 00:04:42.315 buf 0x200000600000 len 8388608 PASSED 00:04:42.315 free 0x200000600000 8388608 00:04:42.315 unregister 0x200000400000 10485760 PASSED 00:04:42.315 passed 00:04:42.315 00:04:42.315 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.315 suites 1 1 n/a 0 0 00:04:42.315 tests 1 1 1 0 0 00:04:42.315 asserts 15 15 15 0 n/a 00:04:42.315 00:04:42.315 Elapsed time = 0.005 seconds 00:04:42.315 00:04:42.315 real 0m0.061s 00:04:42.315 user 0m0.017s 00:04:42.315 sys 0m0.043s 00:04:42.315 01:07:17 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:42.315 01:07:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:42.315 ************************************ 00:04:42.315 END TEST env_mem_callbacks 00:04:42.315 ************************************ 00:04:42.315 00:04:42.315 real 0m6.833s 00:04:42.315 user 0m4.614s 00:04:42.315 sys 0m1.254s 00:04:42.315 01:07:17 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:42.315 01:07:17 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.315 ************************************ 00:04:42.315 END TEST env 00:04:42.315 ************************************ 00:04:42.315 01:07:18 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:42.315 01:07:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:42.315 01:07:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:42.315 01:07:18 -- common/autotest_common.sh@10 -- # set +x 00:04:42.575 ************************************ 00:04:42.575 START TEST rpc 00:04:42.575 ************************************ 00:04:42.575 01:07:18 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:42.575 * Looking for test storage... 00:04:42.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:42.575 01:07:18 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3911460 00:04:42.575 01:07:18 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.575 01:07:18 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:42.575 01:07:18 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3911460 00:04:42.575 01:07:18 rpc -- common/autotest_common.sh@827 -- # '[' -z 3911460 ']' 00:04:42.575 01:07:18 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.575 01:07:18 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:42.575 01:07:18 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
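The waitforlisten step above (spdk_tgt started with -e bdev, then blocking until /var/tmp/spdk.sock answers) can be approximated as in the sketch below; rpc_get_methods and the -s socket option are used here only as an assumed cheap liveness probe, not the harness's exact mechanism:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$rootdir/build/bin/spdk_tgt" -e bdev &   # enable the bdev tracepoint group, as rpc.sh does
spdk_pid=$!

# Poll the RPC socket until the target responds (give up after roughly 10 seconds).
for _ in $(seq 1 100); do
    if "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done
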
00:04:42.575 01:07:18 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:42.575 01:07:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.575 [2024-05-15 01:07:18.211542] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:04:42.575 [2024-05-15 01:07:18.211590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3911460 ] 00:04:42.575 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.834 [2024-05-15 01:07:18.279654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.834 [2024-05-15 01:07:18.353900] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:42.834 [2024-05-15 01:07:18.353938] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3911460' to capture a snapshot of events at runtime. 00:04:42.834 [2024-05-15 01:07:18.353949] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:42.834 [2024-05-15 01:07:18.353958] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:42.834 [2024-05-15 01:07:18.353965] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3911460 for offline analysis/debug. 00:04:42.834 [2024-05-15 01:07:18.353987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.402 01:07:19 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:43.402 01:07:19 rpc -- common/autotest_common.sh@860 -- # return 0 00:04:43.402 01:07:19 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:43.402 01:07:19 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:43.402 01:07:19 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:43.402 01:07:19 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:43.402 01:07:19 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:43.402 01:07:19 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.402 01:07:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.402 ************************************ 00:04:43.402 START TEST rpc_integrity 00:04:43.402 ************************************ 00:04:43.402 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:43.402 01:07:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:43.402 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.402 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.402 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.402 01:07:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:43.402 01:07:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:43.661 01:07:19 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:43.661 01:07:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:43.661 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.661 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.661 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.661 01:07:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:43.661 01:07:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:43.661 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.661 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.661 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.661 01:07:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:43.661 { 00:04:43.661 "name": "Malloc0", 00:04:43.661 "aliases": [ 00:04:43.661 "cd47b90b-6d1e-44f7-a7ce-8defb7cbf601" 00:04:43.661 ], 00:04:43.661 "product_name": "Malloc disk", 00:04:43.661 "block_size": 512, 00:04:43.661 "num_blocks": 16384, 00:04:43.661 "uuid": "cd47b90b-6d1e-44f7-a7ce-8defb7cbf601", 00:04:43.661 "assigned_rate_limits": { 00:04:43.661 "rw_ios_per_sec": 0, 00:04:43.661 "rw_mbytes_per_sec": 0, 00:04:43.661 "r_mbytes_per_sec": 0, 00:04:43.661 "w_mbytes_per_sec": 0 00:04:43.661 }, 00:04:43.661 "claimed": false, 00:04:43.661 "zoned": false, 00:04:43.661 "supported_io_types": { 00:04:43.661 "read": true, 00:04:43.661 "write": true, 00:04:43.661 "unmap": true, 00:04:43.661 "write_zeroes": true, 00:04:43.661 "flush": true, 00:04:43.661 "reset": true, 00:04:43.661 "compare": false, 00:04:43.661 "compare_and_write": false, 00:04:43.661 "abort": true, 00:04:43.661 "nvme_admin": false, 00:04:43.661 "nvme_io": false 00:04:43.661 }, 00:04:43.661 "memory_domains": [ 00:04:43.661 { 00:04:43.661 "dma_device_id": "system", 00:04:43.661 "dma_device_type": 1 00:04:43.661 }, 00:04:43.661 { 00:04:43.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.661 "dma_device_type": 2 00:04:43.661 } 00:04:43.661 ], 00:04:43.661 "driver_specific": {} 00:04:43.661 } 00:04:43.661 ]' 00:04:43.661 01:07:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:43.661 01:07:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:43.661 01:07:19 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:43.661 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.661 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.661 [2024-05-15 01:07:19.176641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:43.661 [2024-05-15 01:07:19.176670] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:43.661 [2024-05-15 01:07:19.176684] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe000d0 00:04:43.661 [2024-05-15 01:07:19.176692] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:43.661 [2024-05-15 01:07:19.177753] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:43.661 [2024-05-15 01:07:19.177775] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:43.661 Passthru0 00:04:43.661 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.661 01:07:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:43.661 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.661 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.661 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.662 01:07:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:43.662 { 00:04:43.662 "name": "Malloc0", 00:04:43.662 "aliases": [ 00:04:43.662 "cd47b90b-6d1e-44f7-a7ce-8defb7cbf601" 00:04:43.662 ], 00:04:43.662 "product_name": "Malloc disk", 00:04:43.662 "block_size": 512, 00:04:43.662 "num_blocks": 16384, 00:04:43.662 "uuid": "cd47b90b-6d1e-44f7-a7ce-8defb7cbf601", 00:04:43.662 "assigned_rate_limits": { 00:04:43.662 "rw_ios_per_sec": 0, 00:04:43.662 "rw_mbytes_per_sec": 0, 00:04:43.662 "r_mbytes_per_sec": 0, 00:04:43.662 "w_mbytes_per_sec": 0 00:04:43.662 }, 00:04:43.662 "claimed": true, 00:04:43.662 "claim_type": "exclusive_write", 00:04:43.662 "zoned": false, 00:04:43.662 "supported_io_types": { 00:04:43.662 "read": true, 00:04:43.662 "write": true, 00:04:43.662 "unmap": true, 00:04:43.662 "write_zeroes": true, 00:04:43.662 "flush": true, 00:04:43.662 "reset": true, 00:04:43.662 "compare": false, 00:04:43.662 "compare_and_write": false, 00:04:43.662 "abort": true, 00:04:43.662 "nvme_admin": false, 00:04:43.662 "nvme_io": false 00:04:43.662 }, 00:04:43.662 "memory_domains": [ 00:04:43.662 { 00:04:43.662 "dma_device_id": "system", 00:04:43.662 "dma_device_type": 1 00:04:43.662 }, 00:04:43.662 { 00:04:43.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.662 "dma_device_type": 2 00:04:43.662 } 00:04:43.662 ], 00:04:43.662 "driver_specific": {} 00:04:43.662 }, 00:04:43.662 { 00:04:43.662 "name": "Passthru0", 00:04:43.662 "aliases": [ 00:04:43.662 "b7160dca-55a3-51f4-a4f0-7befb62b4cbc" 00:04:43.662 ], 00:04:43.662 "product_name": "passthru", 00:04:43.662 "block_size": 512, 00:04:43.662 "num_blocks": 16384, 00:04:43.662 "uuid": "b7160dca-55a3-51f4-a4f0-7befb62b4cbc", 00:04:43.662 "assigned_rate_limits": { 00:04:43.662 "rw_ios_per_sec": 0, 00:04:43.662 "rw_mbytes_per_sec": 0, 00:04:43.662 "r_mbytes_per_sec": 0, 00:04:43.662 "w_mbytes_per_sec": 0 00:04:43.662 }, 00:04:43.662 "claimed": false, 00:04:43.662 "zoned": false, 00:04:43.662 "supported_io_types": { 00:04:43.662 "read": true, 00:04:43.662 "write": true, 00:04:43.662 "unmap": true, 00:04:43.662 "write_zeroes": true, 00:04:43.662 "flush": true, 00:04:43.662 "reset": true, 00:04:43.662 "compare": false, 00:04:43.662 "compare_and_write": false, 00:04:43.662 "abort": true, 00:04:43.662 "nvme_admin": false, 00:04:43.662 "nvme_io": false 00:04:43.662 }, 00:04:43.662 "memory_domains": [ 00:04:43.662 { 00:04:43.662 "dma_device_id": "system", 00:04:43.662 "dma_device_type": 1 00:04:43.662 }, 00:04:43.662 { 00:04:43.662 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.662 "dma_device_type": 2 00:04:43.662 } 00:04:43.662 ], 00:04:43.662 "driver_specific": { 00:04:43.662 "passthru": { 00:04:43.662 "name": "Passthru0", 00:04:43.662 "base_bdev_name": "Malloc0" 00:04:43.662 } 00:04:43.662 } 00:04:43.662 } 00:04:43.662 ]' 00:04:43.662 01:07:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:43.662 01:07:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:43.662 01:07:19 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:43.662 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.662 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.662 
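The rpc_integrity test above drives a small bdev stack entirely over JSON-RPC: create an 8 MiB malloc bdev with 512-byte blocks, layer a passthru bdev on top, confirm bdev_get_bdevs reports both (with Malloc0 claimed by Passthru0), then tear both down. A minimal stand-alone sketch of the same sequence with scripts/rpc.py (SPDK_DIR is an assumed variable; sizes match the log):

```bash
# Minimal sketch of the rpc_integrity RPC sequence, issued against the default
# /var/tmp/spdk.sock. SPDK_DIR is an assumed variable.
SPDK_DIR=${SPDK_DIR:-$HOME/spdk}
RPC="$SPDK_DIR/scripts/rpc.py"

$RPC bdev_malloc_create 8 512                      # prints the new name, e.g. Malloc0
$RPC bdev_passthru_create -b Malloc0 -p Passthru0  # passthru vbdev claims Malloc0
$RPC bdev_get_bdevs | jq length                    # expect 2, as the test asserts
$RPC bdev_get_bdevs | jq -r '.[].name'             # Malloc0, Passthru0
$RPC bdev_passthru_delete Passthru0
$RPC bdev_malloc_delete Malloc0
```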
01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.662 01:07:19 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:43.662 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.662 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.662 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.662 01:07:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:43.662 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.662 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.662 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.662 01:07:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:43.662 01:07:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:43.662 01:07:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:43.662 00:04:43.662 real 0m0.264s 00:04:43.662 user 0m0.152s 00:04:43.662 sys 0m0.057s 00:04:43.662 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:43.662 01:07:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:43.662 ************************************ 00:04:43.662 END TEST rpc_integrity 00:04:43.662 ************************************ 00:04:43.923 01:07:19 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:43.923 01:07:19 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:43.923 01:07:19 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.923 01:07:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.923 ************************************ 00:04:43.923 START TEST rpc_plugins 00:04:43.923 ************************************ 00:04:43.923 01:07:19 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:04:43.923 01:07:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:43.923 01:07:19 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.923 01:07:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.923 01:07:19 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.923 01:07:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:43.923 01:07:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:43.923 01:07:19 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.923 01:07:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.923 01:07:19 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.923 01:07:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:43.923 { 00:04:43.923 "name": "Malloc1", 00:04:43.923 "aliases": [ 00:04:43.923 "e45158ae-6f50-401c-b6c2-1d3d4ead037e" 00:04:43.923 ], 00:04:43.923 "product_name": "Malloc disk", 00:04:43.923 "block_size": 4096, 00:04:43.923 "num_blocks": 256, 00:04:43.923 "uuid": "e45158ae-6f50-401c-b6c2-1d3d4ead037e", 00:04:43.923 "assigned_rate_limits": { 00:04:43.923 "rw_ios_per_sec": 0, 00:04:43.923 "rw_mbytes_per_sec": 0, 00:04:43.923 "r_mbytes_per_sec": 0, 00:04:43.923 "w_mbytes_per_sec": 0 00:04:43.923 }, 00:04:43.923 "claimed": false, 00:04:43.923 "zoned": false, 00:04:43.923 "supported_io_types": { 00:04:43.923 "read": true, 00:04:43.923 "write": true, 00:04:43.923 "unmap": true, 00:04:43.923 "write_zeroes": true, 00:04:43.923 
"flush": true, 00:04:43.923 "reset": true, 00:04:43.923 "compare": false, 00:04:43.923 "compare_and_write": false, 00:04:43.923 "abort": true, 00:04:43.923 "nvme_admin": false, 00:04:43.923 "nvme_io": false 00:04:43.923 }, 00:04:43.923 "memory_domains": [ 00:04:43.923 { 00:04:43.923 "dma_device_id": "system", 00:04:43.924 "dma_device_type": 1 00:04:43.924 }, 00:04:43.924 { 00:04:43.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:43.924 "dma_device_type": 2 00:04:43.924 } 00:04:43.924 ], 00:04:43.924 "driver_specific": {} 00:04:43.924 } 00:04:43.924 ]' 00:04:43.924 01:07:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:43.924 01:07:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:43.924 01:07:19 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:43.924 01:07:19 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.924 01:07:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.924 01:07:19 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.924 01:07:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:43.924 01:07:19 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.924 01:07:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.924 01:07:19 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.924 01:07:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:43.924 01:07:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:43.924 01:07:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:43.924 00:04:43.924 real 0m0.142s 00:04:43.924 user 0m0.089s 00:04:43.924 sys 0m0.025s 00:04:43.924 01:07:19 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:43.924 01:07:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:43.924 ************************************ 00:04:43.924 END TEST rpc_plugins 00:04:43.924 ************************************ 00:04:43.924 01:07:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:43.924 01:07:19 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:43.924 01:07:19 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.924 01:07:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.221 ************************************ 00:04:44.221 START TEST rpc_trace_cmd_test 00:04:44.221 ************************************ 00:04:44.221 01:07:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:04:44.221 01:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:44.221 01:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:44.221 01:07:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.221 01:07:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:44.221 01:07:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.221 01:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:44.221 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3911460", 00:04:44.221 "tpoint_group_mask": "0x8", 00:04:44.221 "iscsi_conn": { 00:04:44.221 "mask": "0x2", 00:04:44.221 "tpoint_mask": "0x0" 00:04:44.221 }, 00:04:44.221 "scsi": { 00:04:44.221 "mask": "0x4", 00:04:44.221 "tpoint_mask": "0x0" 00:04:44.221 }, 00:04:44.221 "bdev": { 00:04:44.221 "mask": "0x8", 00:04:44.221 "tpoint_mask": 
"0xffffffffffffffff" 00:04:44.221 }, 00:04:44.221 "nvmf_rdma": { 00:04:44.221 "mask": "0x10", 00:04:44.221 "tpoint_mask": "0x0" 00:04:44.221 }, 00:04:44.221 "nvmf_tcp": { 00:04:44.221 "mask": "0x20", 00:04:44.221 "tpoint_mask": "0x0" 00:04:44.221 }, 00:04:44.221 "ftl": { 00:04:44.221 "mask": "0x40", 00:04:44.221 "tpoint_mask": "0x0" 00:04:44.221 }, 00:04:44.221 "blobfs": { 00:04:44.221 "mask": "0x80", 00:04:44.221 "tpoint_mask": "0x0" 00:04:44.221 }, 00:04:44.221 "dsa": { 00:04:44.221 "mask": "0x200", 00:04:44.221 "tpoint_mask": "0x0" 00:04:44.221 }, 00:04:44.221 "thread": { 00:04:44.221 "mask": "0x400", 00:04:44.221 "tpoint_mask": "0x0" 00:04:44.221 }, 00:04:44.221 "nvme_pcie": { 00:04:44.221 "mask": "0x800", 00:04:44.221 "tpoint_mask": "0x0" 00:04:44.221 }, 00:04:44.221 "iaa": { 00:04:44.221 "mask": "0x1000", 00:04:44.221 "tpoint_mask": "0x0" 00:04:44.221 }, 00:04:44.221 "nvme_tcp": { 00:04:44.221 "mask": "0x2000", 00:04:44.221 "tpoint_mask": "0x0" 00:04:44.221 }, 00:04:44.221 "bdev_nvme": { 00:04:44.221 "mask": "0x4000", 00:04:44.221 "tpoint_mask": "0x0" 00:04:44.221 }, 00:04:44.221 "sock": { 00:04:44.221 "mask": "0x8000", 00:04:44.221 "tpoint_mask": "0x0" 00:04:44.221 } 00:04:44.221 }' 00:04:44.221 01:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:44.221 01:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:44.221 01:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:44.221 01:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:44.221 01:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:44.221 01:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:44.221 01:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:44.221 01:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:44.221 01:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:44.221 01:07:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:44.221 00:04:44.221 real 0m0.224s 00:04:44.221 user 0m0.189s 00:04:44.221 sys 0m0.028s 00:04:44.221 01:07:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:44.221 01:07:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:44.221 ************************************ 00:04:44.221 END TEST rpc_trace_cmd_test 00:04:44.221 ************************************ 00:04:44.221 01:07:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:44.221 01:07:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:44.221 01:07:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:44.221 01:07:19 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:44.221 01:07:19 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:44.221 01:07:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.480 ************************************ 00:04:44.480 START TEST rpc_daemon_integrity 00:04:44.480 ************************************ 00:04:44.480 01:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:44.480 01:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:44.480 01:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.480 01:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.480 01:07:19 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.480 01:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:44.480 01:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:44.480 01:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:44.480 01:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:44.480 01:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.480 01:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.480 01:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.480 01:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:44.480 01:07:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:44.480 01:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.480 01:07:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.480 01:07:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.480 01:07:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:44.480 { 00:04:44.480 "name": "Malloc2", 00:04:44.480 "aliases": [ 00:04:44.480 "90f1104d-4437-45b2-ac32-81cde1878107" 00:04:44.480 ], 00:04:44.480 "product_name": "Malloc disk", 00:04:44.480 "block_size": 512, 00:04:44.480 "num_blocks": 16384, 00:04:44.480 "uuid": "90f1104d-4437-45b2-ac32-81cde1878107", 00:04:44.480 "assigned_rate_limits": { 00:04:44.480 "rw_ios_per_sec": 0, 00:04:44.480 "rw_mbytes_per_sec": 0, 00:04:44.480 "r_mbytes_per_sec": 0, 00:04:44.480 "w_mbytes_per_sec": 0 00:04:44.480 }, 00:04:44.480 "claimed": false, 00:04:44.480 "zoned": false, 00:04:44.480 "supported_io_types": { 00:04:44.480 "read": true, 00:04:44.480 "write": true, 00:04:44.480 "unmap": true, 00:04:44.480 "write_zeroes": true, 00:04:44.480 "flush": true, 00:04:44.480 "reset": true, 00:04:44.480 "compare": false, 00:04:44.480 "compare_and_write": false, 00:04:44.480 "abort": true, 00:04:44.480 "nvme_admin": false, 00:04:44.480 "nvme_io": false 00:04:44.480 }, 00:04:44.480 "memory_domains": [ 00:04:44.480 { 00:04:44.480 "dma_device_id": "system", 00:04:44.480 "dma_device_type": 1 00:04:44.480 }, 00:04:44.480 { 00:04:44.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.480 "dma_device_type": 2 00:04:44.480 } 00:04:44.480 ], 00:04:44.480 "driver_specific": {} 00:04:44.480 } 00:04:44.480 ]' 00:04:44.480 01:07:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:44.480 01:07:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:44.480 01:07:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:44.480 01:07:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.480 01:07:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.480 [2024-05-15 01:07:20.067067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:44.480 [2024-05-15 01:07:20.067107] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:44.480 [2024-05-15 01:07:20.067132] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdffe80 00:04:44.480 [2024-05-15 01:07:20.067145] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:44.480 [2024-05-15 01:07:20.068330] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:44.480 [2024-05-15 01:07:20.068359] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:44.480 Passthru0 00:04:44.480 01:07:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.480 01:07:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:44.480 01:07:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.480 01:07:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.480 01:07:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.480 01:07:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:44.480 { 00:04:44.480 "name": "Malloc2", 00:04:44.480 "aliases": [ 00:04:44.480 "90f1104d-4437-45b2-ac32-81cde1878107" 00:04:44.480 ], 00:04:44.480 "product_name": "Malloc disk", 00:04:44.480 "block_size": 512, 00:04:44.480 "num_blocks": 16384, 00:04:44.480 "uuid": "90f1104d-4437-45b2-ac32-81cde1878107", 00:04:44.480 "assigned_rate_limits": { 00:04:44.480 "rw_ios_per_sec": 0, 00:04:44.480 "rw_mbytes_per_sec": 0, 00:04:44.480 "r_mbytes_per_sec": 0, 00:04:44.480 "w_mbytes_per_sec": 0 00:04:44.480 }, 00:04:44.480 "claimed": true, 00:04:44.480 "claim_type": "exclusive_write", 00:04:44.480 "zoned": false, 00:04:44.480 "supported_io_types": { 00:04:44.480 "read": true, 00:04:44.480 "write": true, 00:04:44.480 "unmap": true, 00:04:44.480 "write_zeroes": true, 00:04:44.480 "flush": true, 00:04:44.480 "reset": true, 00:04:44.480 "compare": false, 00:04:44.480 "compare_and_write": false, 00:04:44.480 "abort": true, 00:04:44.480 "nvme_admin": false, 00:04:44.480 "nvme_io": false 00:04:44.480 }, 00:04:44.480 "memory_domains": [ 00:04:44.480 { 00:04:44.480 "dma_device_id": "system", 00:04:44.480 "dma_device_type": 1 00:04:44.480 }, 00:04:44.480 { 00:04:44.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.480 "dma_device_type": 2 00:04:44.480 } 00:04:44.481 ], 00:04:44.481 "driver_specific": {} 00:04:44.481 }, 00:04:44.481 { 00:04:44.481 "name": "Passthru0", 00:04:44.481 "aliases": [ 00:04:44.481 "8ecf11f7-05ab-50a4-8c61-2d031b353151" 00:04:44.481 ], 00:04:44.481 "product_name": "passthru", 00:04:44.481 "block_size": 512, 00:04:44.481 "num_blocks": 16384, 00:04:44.481 "uuid": "8ecf11f7-05ab-50a4-8c61-2d031b353151", 00:04:44.481 "assigned_rate_limits": { 00:04:44.481 "rw_ios_per_sec": 0, 00:04:44.481 "rw_mbytes_per_sec": 0, 00:04:44.481 "r_mbytes_per_sec": 0, 00:04:44.481 "w_mbytes_per_sec": 0 00:04:44.481 }, 00:04:44.481 "claimed": false, 00:04:44.481 "zoned": false, 00:04:44.481 "supported_io_types": { 00:04:44.481 "read": true, 00:04:44.481 "write": true, 00:04:44.481 "unmap": true, 00:04:44.481 "write_zeroes": true, 00:04:44.481 "flush": true, 00:04:44.481 "reset": true, 00:04:44.481 "compare": false, 00:04:44.481 "compare_and_write": false, 00:04:44.481 "abort": true, 00:04:44.481 "nvme_admin": false, 00:04:44.481 "nvme_io": false 00:04:44.481 }, 00:04:44.481 "memory_domains": [ 00:04:44.481 { 00:04:44.481 "dma_device_id": "system", 00:04:44.481 "dma_device_type": 1 00:04:44.481 }, 00:04:44.481 { 00:04:44.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.481 "dma_device_type": 2 00:04:44.481 } 00:04:44.481 ], 00:04:44.481 "driver_specific": { 00:04:44.481 "passthru": { 00:04:44.481 "name": "Passthru0", 00:04:44.481 "base_bdev_name": "Malloc2" 00:04:44.481 } 00:04:44.481 } 00:04:44.481 } 00:04:44.481 ]' 00:04:44.481 01:07:20 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:44.481 01:07:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:44.481 01:07:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:44.481 01:07:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.481 01:07:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.481 01:07:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.481 01:07:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:44.481 01:07:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.481 01:07:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.481 01:07:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.481 01:07:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:44.481 01:07:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.481 01:07:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.481 01:07:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.481 01:07:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:44.481 01:07:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:44.740 01:07:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:44.740 00:04:44.740 real 0m0.267s 00:04:44.740 user 0m0.166s 00:04:44.740 sys 0m0.044s 00:04:44.740 01:07:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:44.740 01:07:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.740 ************************************ 00:04:44.740 END TEST rpc_daemon_integrity 00:04:44.740 ************************************ 00:04:44.740 01:07:20 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:44.740 01:07:20 rpc -- rpc/rpc.sh@84 -- # killprocess 3911460 00:04:44.740 01:07:20 rpc -- common/autotest_common.sh@946 -- # '[' -z 3911460 ']' 00:04:44.740 01:07:20 rpc -- common/autotest_common.sh@950 -- # kill -0 3911460 00:04:44.740 01:07:20 rpc -- common/autotest_common.sh@951 -- # uname 00:04:44.740 01:07:20 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:44.740 01:07:20 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3911460 00:04:44.740 01:07:20 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:44.740 01:07:20 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:44.740 01:07:20 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3911460' 00:04:44.740 killing process with pid 3911460 00:04:44.740 01:07:20 rpc -- common/autotest_common.sh@965 -- # kill 3911460 00:04:44.740 01:07:20 rpc -- common/autotest_common.sh@970 -- # wait 3911460 00:04:44.999 00:04:44.999 real 0m2.574s 00:04:44.999 user 0m3.224s 00:04:44.999 sys 0m0.812s 00:04:44.999 01:07:20 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:44.999 01:07:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.999 ************************************ 00:04:44.999 END TEST rpc 00:04:44.999 ************************************ 00:04:44.999 01:07:20 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:44.999 01:07:20 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:44.999 01:07:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:44.999 01:07:20 -- common/autotest_common.sh@10 -- # set +x 00:04:45.259 ************************************ 00:04:45.259 START TEST skip_rpc 00:04:45.259 ************************************ 00:04:45.259 01:07:20 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:45.259 * Looking for test storage... 00:04:45.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:45.259 01:07:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:45.259 01:07:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:45.259 01:07:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:45.259 01:07:20 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:45.259 01:07:20 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:45.259 01:07:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.259 ************************************ 00:04:45.259 START TEST skip_rpc 00:04:45.259 ************************************ 00:04:45.259 01:07:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:04:45.259 01:07:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3912172 00:04:45.259 01:07:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:45.259 01:07:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:45.259 01:07:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:45.259 [2024-05-15 01:07:20.918665] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
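Every sub-test in this suite follows the same lifecycle visible around here: launch spdk_tgt, wait for its RPC socket, run the assertions, then kill the process from the killprocess trap. A minimal sketch of that lifecycle without the harness helpers (the socket path and the polling loop are assumptions, not the harness's own waitforlisten implementation):

```bash
# Minimal sketch of the launch/wait/kill lifecycle the harness wraps in
# waitforlisten/killprocess. Socket path and timeout are assumptions.
SPDK_DIR=${SPDK_DIR:-$HOME/spdk}
SOCK=/var/tmp/spdk.sock

"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &
pid=$!
trap 'kill -9 $pid 2>/dev/null' EXIT

# Wait until the target answers on its RPC socket (up to ~10 s).
for _ in $(seq 100); do
    "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" -t 1 spdk_get_version >/dev/null 2>&1 && break
    sleep 0.1
done

# ... run test RPCs here ...

kill "$pid" && wait "$pid"
```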
00:04:45.259 [2024-05-15 01:07:20.918711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3912172 ] 00:04:45.259 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.519 [2024-05-15 01:07:20.984750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.519 [2024-05-15 01:07:21.052553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3912172 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 3912172 ']' 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 3912172 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3912172 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3912172' 00:04:50.794 killing process with pid 3912172 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 3912172 00:04:50.794 01:07:25 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 3912172 00:04:50.794 00:04:50.794 real 0m5.398s 00:04:50.794 user 0m5.157s 00:04:50.794 sys 0m0.280s 00:04:50.794 01:07:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.794 01:07:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.794 ************************************ 00:04:50.794 END TEST skip_rpc 
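The skip_rpc sub-test that just ended asserts the negative case: with --no-rpc-server the target never opens /var/tmp/spdk.sock, so rpc_cmd spdk_get_version must fail, and the NOT wrapper turns that failure into a pass. A minimal stand-alone sketch of the same check (SPDK_DIR is an assumed variable):

```bash
# Minimal sketch of the skip_rpc assertion: with --no-rpc-server any RPC call
# is expected to fail because no RPC listener exists.
SPDK_DIR=${SPDK_DIR:-$HOME/spdk}

"$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
pid=$!
sleep 5   # the test script sleeps instead of waiting on a socket that will never appear

if "$SPDK_DIR/scripts/rpc.py" -t 2 spdk_get_version; then
    echo "unexpected: RPC succeeded without an RPC server" >&2
fi

kill "$pid" && wait "$pid"
```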
00:04:50.794 ************************************ 00:04:50.794 01:07:26 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:50.794 01:07:26 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:50.794 01:07:26 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.794 01:07:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.794 ************************************ 00:04:50.794 START TEST skip_rpc_with_json 00:04:50.794 ************************************ 00:04:50.794 01:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:04:50.794 01:07:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:50.794 01:07:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3913177 00:04:50.794 01:07:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.794 01:07:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.794 01:07:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3913177 00:04:50.794 01:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 3913177 ']' 00:04:50.794 01:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.794 01:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:50.794 01:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.794 01:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:50.794 01:07:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.794 [2024-05-15 01:07:26.405100] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
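skip_rpc_with_json, which starts here, first queries the tcp transport (expected to fail with "transport 'tcp' does not exist"), then creates it before saving the configuration; both RPCs appear in the log just below. A minimal sketch of that pair of calls (the rpc.py path is an assumption):

```bash
# Minimal sketch of the nvmf transport RPCs shown immediately below in the log.
RPC=${SPDK_DIR:-$HOME/spdk}/scripts/rpc.py

$RPC nvmf_get_transports --trtype tcp   # fails until a tcp transport exists
$RPC nvmf_create_transport -t tcp       # target logs "*** TCP Transport Init ***"
$RPC nvmf_get_transports --trtype tcp   # now returns the tcp transport parameters
```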
00:04:50.794 [2024-05-15 01:07:26.405145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3913177 ] 00:04:50.794 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.794 [2024-05-15 01:07:26.472721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.054 [2024-05-15 01:07:26.547743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.623 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:51.623 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:04:51.623 01:07:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:51.623 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.623 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.623 [2024-05-15 01:07:27.201430] nvmf_rpc.c:2546:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:51.623 request: 00:04:51.623 { 00:04:51.623 "trtype": "tcp", 00:04:51.623 "method": "nvmf_get_transports", 00:04:51.623 "req_id": 1 00:04:51.623 } 00:04:51.623 Got JSON-RPC error response 00:04:51.623 response: 00:04:51.623 { 00:04:51.623 "code": -19, 00:04:51.623 "message": "No such device" 00:04:51.623 } 00:04:51.623 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:51.623 01:07:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:51.623 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.623 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.623 [2024-05-15 01:07:27.213533] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:51.623 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.623 01:07:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:51.623 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.623 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.883 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.883 01:07:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:51.883 { 00:04:51.883 "subsystems": [ 00:04:51.883 { 00:04:51.883 "subsystem": "vfio_user_target", 00:04:51.883 "config": null 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "subsystem": "keyring", 00:04:51.883 "config": [] 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "subsystem": "iobuf", 00:04:51.883 "config": [ 00:04:51.883 { 00:04:51.883 "method": "iobuf_set_options", 00:04:51.883 "params": { 00:04:51.883 "small_pool_count": 8192, 00:04:51.883 "large_pool_count": 1024, 00:04:51.883 "small_bufsize": 8192, 00:04:51.883 "large_bufsize": 135168 00:04:51.883 } 00:04:51.883 } 00:04:51.883 ] 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "subsystem": "sock", 00:04:51.883 "config": [ 00:04:51.883 { 00:04:51.883 "method": "sock_impl_set_options", 00:04:51.883 "params": { 00:04:51.883 "impl_name": "posix", 00:04:51.883 "recv_buf_size": 2097152, 00:04:51.883 "send_buf_size": 2097152, 
00:04:51.883 "enable_recv_pipe": true, 00:04:51.883 "enable_quickack": false, 00:04:51.883 "enable_placement_id": 0, 00:04:51.883 "enable_zerocopy_send_server": true, 00:04:51.883 "enable_zerocopy_send_client": false, 00:04:51.883 "zerocopy_threshold": 0, 00:04:51.883 "tls_version": 0, 00:04:51.883 "enable_ktls": false 00:04:51.883 } 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "method": "sock_impl_set_options", 00:04:51.883 "params": { 00:04:51.883 "impl_name": "ssl", 00:04:51.883 "recv_buf_size": 4096, 00:04:51.883 "send_buf_size": 4096, 00:04:51.883 "enable_recv_pipe": true, 00:04:51.883 "enable_quickack": false, 00:04:51.883 "enable_placement_id": 0, 00:04:51.883 "enable_zerocopy_send_server": true, 00:04:51.883 "enable_zerocopy_send_client": false, 00:04:51.883 "zerocopy_threshold": 0, 00:04:51.883 "tls_version": 0, 00:04:51.883 "enable_ktls": false 00:04:51.883 } 00:04:51.883 } 00:04:51.883 ] 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "subsystem": "vmd", 00:04:51.883 "config": [] 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "subsystem": "accel", 00:04:51.883 "config": [ 00:04:51.883 { 00:04:51.883 "method": "accel_set_options", 00:04:51.883 "params": { 00:04:51.883 "small_cache_size": 128, 00:04:51.883 "large_cache_size": 16, 00:04:51.883 "task_count": 2048, 00:04:51.883 "sequence_count": 2048, 00:04:51.883 "buf_count": 2048 00:04:51.883 } 00:04:51.883 } 00:04:51.883 ] 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "subsystem": "bdev", 00:04:51.883 "config": [ 00:04:51.883 { 00:04:51.883 "method": "bdev_set_options", 00:04:51.883 "params": { 00:04:51.883 "bdev_io_pool_size": 65535, 00:04:51.883 "bdev_io_cache_size": 256, 00:04:51.883 "bdev_auto_examine": true, 00:04:51.883 "iobuf_small_cache_size": 128, 00:04:51.883 "iobuf_large_cache_size": 16 00:04:51.883 } 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "method": "bdev_raid_set_options", 00:04:51.883 "params": { 00:04:51.883 "process_window_size_kb": 1024 00:04:51.883 } 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "method": "bdev_iscsi_set_options", 00:04:51.883 "params": { 00:04:51.883 "timeout_sec": 30 00:04:51.883 } 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "method": "bdev_nvme_set_options", 00:04:51.883 "params": { 00:04:51.883 "action_on_timeout": "none", 00:04:51.883 "timeout_us": 0, 00:04:51.883 "timeout_admin_us": 0, 00:04:51.883 "keep_alive_timeout_ms": 10000, 00:04:51.883 "arbitration_burst": 0, 00:04:51.883 "low_priority_weight": 0, 00:04:51.883 "medium_priority_weight": 0, 00:04:51.883 "high_priority_weight": 0, 00:04:51.883 "nvme_adminq_poll_period_us": 10000, 00:04:51.883 "nvme_ioq_poll_period_us": 0, 00:04:51.883 "io_queue_requests": 0, 00:04:51.883 "delay_cmd_submit": true, 00:04:51.883 "transport_retry_count": 4, 00:04:51.883 "bdev_retry_count": 3, 00:04:51.883 "transport_ack_timeout": 0, 00:04:51.883 "ctrlr_loss_timeout_sec": 0, 00:04:51.883 "reconnect_delay_sec": 0, 00:04:51.883 "fast_io_fail_timeout_sec": 0, 00:04:51.883 "disable_auto_failback": false, 00:04:51.883 "generate_uuids": false, 00:04:51.883 "transport_tos": 0, 00:04:51.883 "nvme_error_stat": false, 00:04:51.883 "rdma_srq_size": 0, 00:04:51.883 "io_path_stat": false, 00:04:51.883 "allow_accel_sequence": false, 00:04:51.883 "rdma_max_cq_size": 0, 00:04:51.883 "rdma_cm_event_timeout_ms": 0, 00:04:51.883 "dhchap_digests": [ 00:04:51.883 "sha256", 00:04:51.883 "sha384", 00:04:51.883 "sha512" 00:04:51.883 ], 00:04:51.883 "dhchap_dhgroups": [ 00:04:51.883 "null", 00:04:51.883 "ffdhe2048", 00:04:51.883 "ffdhe3072", 00:04:51.883 "ffdhe4096", 00:04:51.883 
"ffdhe6144", 00:04:51.883 "ffdhe8192" 00:04:51.883 ] 00:04:51.883 } 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "method": "bdev_nvme_set_hotplug", 00:04:51.883 "params": { 00:04:51.883 "period_us": 100000, 00:04:51.883 "enable": false 00:04:51.883 } 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "method": "bdev_wait_for_examine" 00:04:51.883 } 00:04:51.883 ] 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "subsystem": "scsi", 00:04:51.883 "config": null 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "subsystem": "scheduler", 00:04:51.883 "config": [ 00:04:51.883 { 00:04:51.883 "method": "framework_set_scheduler", 00:04:51.883 "params": { 00:04:51.883 "name": "static" 00:04:51.883 } 00:04:51.883 } 00:04:51.883 ] 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "subsystem": "vhost_scsi", 00:04:51.883 "config": [] 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "subsystem": "vhost_blk", 00:04:51.883 "config": [] 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "subsystem": "ublk", 00:04:51.883 "config": [] 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "subsystem": "nbd", 00:04:51.883 "config": [] 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "subsystem": "nvmf", 00:04:51.883 "config": [ 00:04:51.883 { 00:04:51.883 "method": "nvmf_set_config", 00:04:51.883 "params": { 00:04:51.883 "discovery_filter": "match_any", 00:04:51.883 "admin_cmd_passthru": { 00:04:51.883 "identify_ctrlr": false 00:04:51.883 } 00:04:51.883 } 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "method": "nvmf_set_max_subsystems", 00:04:51.883 "params": { 00:04:51.883 "max_subsystems": 1024 00:04:51.883 } 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "method": "nvmf_set_crdt", 00:04:51.883 "params": { 00:04:51.883 "crdt1": 0, 00:04:51.883 "crdt2": 0, 00:04:51.883 "crdt3": 0 00:04:51.883 } 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "method": "nvmf_create_transport", 00:04:51.883 "params": { 00:04:51.883 "trtype": "TCP", 00:04:51.883 "max_queue_depth": 128, 00:04:51.883 "max_io_qpairs_per_ctrlr": 127, 00:04:51.883 "in_capsule_data_size": 4096, 00:04:51.883 "max_io_size": 131072, 00:04:51.883 "io_unit_size": 131072, 00:04:51.883 "max_aq_depth": 128, 00:04:51.883 "num_shared_buffers": 511, 00:04:51.883 "buf_cache_size": 4294967295, 00:04:51.883 "dif_insert_or_strip": false, 00:04:51.883 "zcopy": false, 00:04:51.883 "c2h_success": true, 00:04:51.883 "sock_priority": 0, 00:04:51.883 "abort_timeout_sec": 1, 00:04:51.883 "ack_timeout": 0, 00:04:51.883 "data_wr_pool_size": 0 00:04:51.883 } 00:04:51.883 } 00:04:51.883 ] 00:04:51.883 }, 00:04:51.883 { 00:04:51.883 "subsystem": "iscsi", 00:04:51.883 "config": [ 00:04:51.883 { 00:04:51.883 "method": "iscsi_set_options", 00:04:51.883 "params": { 00:04:51.883 "node_base": "iqn.2016-06.io.spdk", 00:04:51.883 "max_sessions": 128, 00:04:51.883 "max_connections_per_session": 2, 00:04:51.883 "max_queue_depth": 64, 00:04:51.883 "default_time2wait": 2, 00:04:51.883 "default_time2retain": 20, 00:04:51.883 "first_burst_length": 8192, 00:04:51.883 "immediate_data": true, 00:04:51.883 "allow_duplicated_isid": false, 00:04:51.883 "error_recovery_level": 0, 00:04:51.883 "nop_timeout": 60, 00:04:51.883 "nop_in_interval": 30, 00:04:51.883 "disable_chap": false, 00:04:51.883 "require_chap": false, 00:04:51.883 "mutual_chap": false, 00:04:51.883 "chap_group": 0, 00:04:51.883 "max_large_datain_per_connection": 64, 00:04:51.884 "max_r2t_per_connection": 4, 00:04:51.884 "pdu_pool_size": 36864, 00:04:51.884 "immediate_data_pool_size": 16384, 00:04:51.884 "data_out_pool_size": 2048 00:04:51.884 } 00:04:51.884 } 00:04:51.884 ] 00:04:51.884 } 
00:04:51.884 ] 00:04:51.884 } 00:04:51.884 01:07:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:51.884 01:07:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3913177 00:04:51.884 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3913177 ']' 00:04:51.884 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3913177 00:04:51.884 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:51.884 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:51.884 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3913177 00:04:51.884 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:51.884 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:51.884 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3913177' 00:04:51.884 killing process with pid 3913177 00:04:51.884 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3913177 00:04:51.884 01:07:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3913177 00:04:52.143 01:07:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3913393 00:04:52.143 01:07:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:52.144 01:07:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:57.420 01:07:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3913393 00:04:57.420 01:07:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 3913393 ']' 00:04:57.420 01:07:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 3913393 00:04:57.420 01:07:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:57.420 01:07:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:57.420 01:07:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3913393 00:04:57.420 01:07:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:57.420 01:07:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:57.420 01:07:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3913393' 00:04:57.420 killing process with pid 3913393 00:04:57.420 01:07:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 3913393 00:04:57.420 01:07:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 3913393 00:04:57.680 01:07:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:57.680 01:07:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:57.680 00:04:57.680 real 0m6.818s 00:04:57.680 user 0m6.607s 00:04:57.680 sys 0m0.657s 00:04:57.680 01:07:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 
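The JSON blob above is the output of save_config written to test/rpc/config.json; the test then restarts the target from that file with --json and greps its log for "TCP Transport Init" to prove the transport was recreated without issuing any RPC. A minimal sketch of the same save-and-replay flow (file names are assumptions; the test uses config.json and log.txt under test/rpc):

```bash
# Minimal sketch of the save/replay flow exercised by skip_rpc_with_json.
SPDK_DIR=${SPDK_DIR:-$HOME/spdk}

"$SPDK_DIR/scripts/rpc.py" save_config > config.json      # dump the live config shown above

"$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 \
    --json config.json > log.txt 2>&1 &
pid=$!
sleep 5

grep -q 'TCP Transport Init' log.txt    # transport recreated purely from the JSON file
kill "$pid" && wait "$pid"
```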
00:04:57.680 01:07:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.680 ************************************ 00:04:57.680 END TEST skip_rpc_with_json 00:04:57.680 ************************************ 00:04:57.680 01:07:33 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:57.680 01:07:33 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:57.680 01:07:33 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:57.680 01:07:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.680 ************************************ 00:04:57.680 START TEST skip_rpc_with_delay 00:04:57.680 ************************************ 00:04:57.680 01:07:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:04:57.680 01:07:33 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.680 01:07:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:57.680 01:07:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.680 01:07:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.680 01:07:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.680 01:07:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.680 01:07:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.680 01:07:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.680 01:07:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.680 01:07:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.680 01:07:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:57.680 01:07:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:57.680 [2024-05-15 01:07:33.317775] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
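skip_rpc_with_delay only exercises the rejected combination shown above: '--wait-for-rpc' together with '--no-rpc-server'. In the normal flow the flag pauses subsystem initialization until an RPC resumes it; a minimal sketch of that flow follows (framework_start_init is the standard resume RPC in current SPDK, but it does not appear in this log):

```bash
# Minimal sketch of the normal --wait-for-rpc flow (the log above only shows the
# rejected --no-rpc-server combination). framework_start_init is the standard
# resume RPC; it is not part of this log.
SPDK_DIR=${SPDK_DIR:-$HOME/spdk}

"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 --wait-for-rpc &
pid=$!
sleep 2

# Early-boot RPCs (e.g. accel or sock options) would go here, then:
"$SPDK_DIR/scripts/rpc.py" framework_start_init

kill "$pid" && wait "$pid"
```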
00:04:57.681 [2024-05-15 01:07:33.317847] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:57.681 01:07:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:57.681 01:07:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:57.681 01:07:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:57.681 01:07:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:57.681 00:04:57.681 real 0m0.072s 00:04:57.681 user 0m0.041s 00:04:57.681 sys 0m0.031s 00:04:57.681 01:07:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:57.681 01:07:33 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:57.681 ************************************ 00:04:57.681 END TEST skip_rpc_with_delay 00:04:57.681 ************************************ 00:04:57.681 01:07:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:57.949 01:07:33 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:57.949 01:07:33 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:57.949 01:07:33 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:57.949 01:07:33 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:57.949 01:07:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.949 ************************************ 00:04:57.949 START TEST exit_on_failed_rpc_init 00:04:57.949 ************************************ 00:04:57.949 01:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:04:57.949 01:07:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3914393 00:04:57.949 01:07:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3914393 00:04:57.949 01:07:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:57.949 01:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 3914393 ']' 00:04:57.949 01:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.949 01:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:57.949 01:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.949 01:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:57.949 01:07:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:57.949 [2024-05-15 01:07:33.477761] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
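Both this sub-test and the next one, exit_on_failed_rpc_init, hinge on asserting that a command fails, which the harness does with its NOT wrapper. A minimal stand-alone equivalent, with a hypothetical helper name, might look like this:

```bash
# Minimal sketch of a negative assertion in the spirit of the harness's NOT
# wrapper. expect_failure is a hypothetical name, not part of the SPDK tree.
expect_failure() {
    if "$@"; then
        echo "unexpectedly succeeded: $*" >&2
        return 1
    fi
    return 0
}

# Example: the combination rejected in the log above.
SPDK_DIR=${SPDK_DIR:-$HOME/spdk}
expect_failure "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc
```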
00:04:57.949 [2024-05-15 01:07:33.477810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3914393 ] 00:04:57.949 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.949 [2024-05-15 01:07:33.548815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.949 [2024-05-15 01:07:33.615552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.885 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:58.885 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:04:58.885 01:07:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:58.886 [2024-05-15 01:07:34.330536] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:04:58.886 [2024-05-15 01:07:34.330585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3914659 ] 00:04:58.886 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.886 [2024-05-15 01:07:34.399126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.886 [2024-05-15 01:07:34.469375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.886 [2024-05-15 01:07:34.469448] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:58.886 [2024-05-15 01:07:34.469460] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:58.886 [2024-05-15 01:07:34.469468] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3914393 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 3914393 ']' 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 3914393 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:58.886 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3914393 00:04:59.145 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:59.145 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:59.145 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3914393' 00:04:59.145 killing process with pid 3914393 00:04:59.145 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 3914393 00:04:59.145 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 3914393 00:04:59.404 00:04:59.404 real 0m1.523s 00:04:59.404 user 0m1.748s 00:04:59.404 sys 0m0.436s 00:04:59.404 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:59.404 01:07:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:59.404 ************************************ 00:04:59.404 END TEST exit_on_failed_rpc_init 00:04:59.404 ************************************ 00:04:59.404 01:07:34 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:59.404 00:04:59.404 real 0m14.272s 00:04:59.404 user 0m13.711s 00:04:59.404 sys 0m1.721s 00:04:59.404 01:07:34 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:59.404 01:07:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.404 ************************************ 00:04:59.404 END TEST skip_rpc 00:04:59.404 ************************************ 00:04:59.404 01:07:35 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:59.404 01:07:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:59.404 01:07:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:59.404 01:07:35 -- 
common/autotest_common.sh@10 -- # set +x 00:04:59.404 ************************************ 00:04:59.404 START TEST rpc_client 00:04:59.404 ************************************ 00:04:59.404 01:07:35 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:59.662 * Looking for test storage... 00:04:59.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:59.662 01:07:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:59.662 OK 00:04:59.662 01:07:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:59.662 00:04:59.662 real 0m0.137s 00:04:59.662 user 0m0.063s 00:04:59.662 sys 0m0.084s 00:04:59.662 01:07:35 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:59.662 01:07:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:59.662 ************************************ 00:04:59.662 END TEST rpc_client 00:04:59.662 ************************************ 00:04:59.662 01:07:35 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:59.662 01:07:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:59.662 01:07:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:59.662 01:07:35 -- common/autotest_common.sh@10 -- # set +x 00:04:59.662 ************************************ 00:04:59.662 START TEST json_config 00:04:59.662 ************************************ 00:04:59.662 01:07:35 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:59.921 01:07:35 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:59.921 01:07:35 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:59.921 01:07:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:59.922 01:07:35 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.922 01:07:35 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.922 01:07:35 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.922 01:07:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.922 01:07:35 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.922 01:07:35 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.922 01:07:35 json_config -- paths/export.sh@5 -- # export PATH 00:04:59.922 01:07:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@47 -- # : 0 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:59.922 01:07:35 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:59.922 01:07:35 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:59.922 01:07:35 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:59.922 01:07:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:59.922 01:07:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:59.922 01:07:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:59.922 01:07:35 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:59.922 01:07:35 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:59.922 01:07:35 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:59.922 01:07:35 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:59.922 01:07:35 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:59.922 01:07:35 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:59.922 01:07:35 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:59.922 01:07:35 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:59.922 01:07:35 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:59.922 01:07:35 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:59.922 01:07:35 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:59.922 INFO: JSON configuration test init 00:04:59.922 01:07:35 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:59.922 01:07:35 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:59.922 01:07:35 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:59.922 01:07:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.922 01:07:35 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:59.922 01:07:35 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:59.922 01:07:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.922 01:07:35 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:59.922 01:07:35 json_config -- json_config/common.sh@9 -- # local app=target 00:04:59.922 01:07:35 json_config -- json_config/common.sh@10 -- # shift 00:04:59.922 01:07:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:59.922 01:07:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:59.922 01:07:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:59.922 01:07:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.922 01:07:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:59.922 01:07:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3914860 00:04:59.922 01:07:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:59.922 Waiting for target to run... 
00:04:59.922 01:07:35 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:59.922 01:07:35 json_config -- json_config/common.sh@25 -- # waitforlisten 3914860 /var/tmp/spdk_tgt.sock 00:04:59.922 01:07:35 json_config -- common/autotest_common.sh@827 -- # '[' -z 3914860 ']' 00:04:59.922 01:07:35 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:59.922 01:07:35 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:59.922 01:07:35 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:59.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:59.922 01:07:35 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:59.922 01:07:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:59.922 [2024-05-15 01:07:35.482675] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:04:59.922 [2024-05-15 01:07:35.482723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3914860 ] 00:04:59.922 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.490 [2024-05-15 01:07:35.921085] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.490 [2024-05-15 01:07:36.010657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.749 01:07:36 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:00.749 01:07:36 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:00.749 01:07:36 json_config -- json_config/common.sh@26 -- # echo '' 00:05:00.749 00:05:00.749 01:07:36 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:00.749 01:07:36 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:00.749 01:07:36 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:00.749 01:07:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.749 01:07:36 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:00.749 01:07:36 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:00.749 01:07:36 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.749 01:07:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.749 01:07:36 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:00.749 01:07:36 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:00.749 01:07:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:04.038 01:07:39 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:04.038 01:07:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.038 01:07:39 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:04.038 01:07:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:04.038 01:07:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.038 01:07:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:04.038 01:07:39 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:04.038 01:07:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:04.038 01:07:39 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:04.038 01:07:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:04.297 MallocForNvmf0 00:05:04.297 01:07:39 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:04.297 01:07:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:04.297 MallocForNvmf1 00:05:04.297 01:07:39 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:04.297 01:07:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:04.556 [2024-05-15 01:07:40.134292] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:04.556 01:07:40 
json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:04.556 01:07:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:04.815 01:07:40 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:04.815 01:07:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:04.815 01:07:40 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:04.815 01:07:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:05.073 01:07:40 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:05.073 01:07:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:05.334 [2024-05-15 01:07:40.795987] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:05.334 [2024-05-15 01:07:40.796390] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:05.334 01:07:40 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:05.334 01:07:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.334 01:07:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.334 01:07:40 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:05.334 01:07:40 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.334 01:07:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.334 01:07:40 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:05.334 01:07:40 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:05.334 01:07:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:05.646 MallocBdevForConfigChangeCheck 00:05:05.646 01:07:41 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:05.646 01:07:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.646 01:07:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.646 01:07:41 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:05.646 01:07:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:05.906 01:07:41 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down 
applications...' 00:05:05.906 INFO: shutting down applications... 00:05:05.906 01:07:41 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:05.906 01:07:41 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:05.906 01:07:41 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:05.906 01:07:41 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:08.443 Calling clear_iscsi_subsystem 00:05:08.443 Calling clear_nvmf_subsystem 00:05:08.443 Calling clear_nbd_subsystem 00:05:08.443 Calling clear_ublk_subsystem 00:05:08.443 Calling clear_vhost_blk_subsystem 00:05:08.443 Calling clear_vhost_scsi_subsystem 00:05:08.443 Calling clear_bdev_subsystem 00:05:08.443 01:07:43 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:08.443 01:07:43 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:08.443 01:07:43 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:08.443 01:07:43 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:08.443 01:07:43 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:08.443 01:07:43 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:08.443 01:07:43 json_config -- json_config/json_config.sh@345 -- # break 00:05:08.443 01:07:43 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:08.443 01:07:43 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:08.443 01:07:43 json_config -- json_config/common.sh@31 -- # local app=target 00:05:08.443 01:07:43 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:08.443 01:07:43 json_config -- json_config/common.sh@35 -- # [[ -n 3914860 ]] 00:05:08.443 01:07:43 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3914860 00:05:08.443 [2024-05-15 01:07:43.868794] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:08.443 01:07:43 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:08.443 01:07:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.443 01:07:43 json_config -- json_config/common.sh@41 -- # kill -0 3914860 00:05:08.443 01:07:43 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:08.703 01:07:44 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:08.703 01:07:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.703 01:07:44 json_config -- json_config/common.sh@41 -- # kill -0 3914860 00:05:08.703 01:07:44 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:08.703 01:07:44 json_config -- json_config/common.sh@43 -- # break 00:05:08.703 01:07:44 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:08.703 01:07:44 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:08.703 SPDK target shutdown done 00:05:08.703 01:07:44 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:08.703 INFO: relaunching applications... 00:05:08.703 01:07:44 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:08.703 01:07:44 json_config -- json_config/common.sh@9 -- # local app=target 00:05:08.703 01:07:44 json_config -- json_config/common.sh@10 -- # shift 00:05:08.703 01:07:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:08.703 01:07:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:08.703 01:07:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:08.703 01:07:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.703 01:07:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.703 01:07:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3916509 00:05:08.703 01:07:44 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:08.703 01:07:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:08.703 Waiting for target to run... 00:05:08.703 01:07:44 json_config -- json_config/common.sh@25 -- # waitforlisten 3916509 /var/tmp/spdk_tgt.sock 00:05:08.703 01:07:44 json_config -- common/autotest_common.sh@827 -- # '[' -z 3916509 ']' 00:05:08.703 01:07:44 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:08.703 01:07:44 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:08.703 01:07:44 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:08.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:08.703 01:07:44 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:08.703 01:07:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:08.963 [2024-05-15 01:07:44.422416] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:05:08.963 [2024-05-15 01:07:44.422479] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916509 ] 00:05:08.963 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.222 [2024-05-15 01:07:44.856528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.481 [2024-05-15 01:07:44.939464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.773 [2024-05-15 01:07:47.960227] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:12.773 [2024-05-15 01:07:47.992231] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:12.773 [2024-05-15 01:07:47.992610] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:13.033 01:07:48 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:13.033 01:07:48 json_config -- common/autotest_common.sh@860 -- # return 0 00:05:13.033 01:07:48 json_config -- json_config/common.sh@26 -- # echo '' 00:05:13.033 00:05:13.033 01:07:48 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:13.033 01:07:48 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:13.033 INFO: Checking if target configuration is the same... 00:05:13.033 01:07:48 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.033 01:07:48 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:13.033 01:07:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.033 + '[' 2 -ne 2 ']' 00:05:13.033 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:13.033 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:13.033 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:13.033 +++ basename /dev/fd/62 00:05:13.033 ++ mktemp /tmp/62.XXX 00:05:13.033 + tmp_file_1=/tmp/62.sNK 00:05:13.033 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.033 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:13.033 + tmp_file_2=/tmp/spdk_tgt_config.json.gyn 00:05:13.033 + ret=0 00:05:13.033 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:13.292 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:13.292 + diff -u /tmp/62.sNK /tmp/spdk_tgt_config.json.gyn 00:05:13.292 + echo 'INFO: JSON config files are the same' 00:05:13.292 INFO: JSON config files are the same 00:05:13.292 + rm /tmp/62.sNK /tmp/spdk_tgt_config.json.gyn 00:05:13.292 + exit 0 00:05:13.292 01:07:48 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:13.292 01:07:48 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:13.292 INFO: changing configuration and checking if this can be detected... 
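The "same configuration" check traced above reduces to dumping the live configuration over RPC, sorting both JSON files into a canonical order, and diffing them. A minimal standalone sketch of that flow, reusing only the commands visible in this trace (the temporary file names and the stdin/stdout redirection around config_filter.py are illustrative assumptions, not lifted from the log):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # dump the running target's configuration and normalize both JSON files
  $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live_config.json
  $SPDK/test/json_config/config_filter.py -method sort < /tmp/live_config.json > /tmp/live_sorted.json
  $SPDK/test/json_config/config_filter.py -method sort < $SPDK/spdk_tgt_config.json > /tmp/ref_sorted.json
  # diff exits 0 only when the saved and live configurations match
  diff -u /tmp/ref_sorted.json /tmp/live_sorted.json && echo 'INFO: JSON config files are the same'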
00:05:13.292 01:07:48 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:13.292 01:07:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:13.552 01:07:49 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.552 01:07:49 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:13.552 01:07:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.552 + '[' 2 -ne 2 ']' 00:05:13.552 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:13.552 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:13.552 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:13.552 +++ basename /dev/fd/62 00:05:13.552 ++ mktemp /tmp/62.XXX 00:05:13.552 + tmp_file_1=/tmp/62.mYr 00:05:13.552 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:13.552 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:13.552 + tmp_file_2=/tmp/spdk_tgt_config.json.1mQ 00:05:13.552 + ret=0 00:05:13.552 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:13.811 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:13.811 + diff -u /tmp/62.mYr /tmp/spdk_tgt_config.json.1mQ 00:05:13.811 + ret=1 00:05:13.811 + echo '=== Start of file: /tmp/62.mYr ===' 00:05:13.811 + cat /tmp/62.mYr 00:05:13.811 + echo '=== End of file: /tmp/62.mYr ===' 00:05:13.811 + echo '' 00:05:13.811 + echo '=== Start of file: /tmp/spdk_tgt_config.json.1mQ ===' 00:05:13.811 + cat /tmp/spdk_tgt_config.json.1mQ 00:05:13.811 + echo '=== End of file: /tmp/spdk_tgt_config.json.1mQ ===' 00:05:13.811 + echo '' 00:05:13.811 + rm /tmp/62.mYr /tmp/spdk_tgt_config.json.1mQ 00:05:13.811 + exit 1 00:05:13.811 01:07:49 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:13.811 INFO: configuration change detected. 
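The change-detection step that follows works the same way in reverse: mutate the live configuration over RPC and expect the sorted diff to become non-empty. A sketch under the same assumptions (hypothetical temporary file names):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # delete the marker bdev so the live configuration no longer matches the saved file
  $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/changed_config.json
  $SPDK/test/json_config/config_filter.py -method sort < /tmp/changed_config.json > /tmp/changed_sorted.json
  $SPDK/test/json_config/config_filter.py -method sort < $SPDK/spdk_tgt_config.json > /tmp/ref_sorted.json
  # a non-zero diff status is the expected result once the bdev has been removed
  diff -u /tmp/ref_sorted.json /tmp/changed_sorted.json || echo 'INFO: configuration change detected.'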
00:05:13.811 01:07:49 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:13.811 01:07:49 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:13.811 01:07:49 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:13.811 01:07:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.811 01:07:49 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:13.811 01:07:49 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:13.811 01:07:49 json_config -- json_config/json_config.sh@317 -- # [[ -n 3916509 ]] 00:05:13.811 01:07:49 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:13.811 01:07:49 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:13.811 01:07:49 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:13.811 01:07:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:13.811 01:07:49 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:13.811 01:07:49 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:13.811 01:07:49 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:13.811 01:07:49 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:13.811 01:07:49 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:13.811 01:07:49 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:13.811 01:07:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:13.811 01:07:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.071 01:07:49 json_config -- json_config/json_config.sh@323 -- # killprocess 3916509 00:05:14.071 01:07:49 json_config -- common/autotest_common.sh@946 -- # '[' -z 3916509 ']' 00:05:14.071 01:07:49 json_config -- common/autotest_common.sh@950 -- # kill -0 3916509 00:05:14.071 01:07:49 json_config -- common/autotest_common.sh@951 -- # uname 00:05:14.071 01:07:49 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:14.071 01:07:49 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3916509 00:05:14.071 01:07:49 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:14.071 01:07:49 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:14.071 01:07:49 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3916509' 00:05:14.071 killing process with pid 3916509 00:05:14.071 01:07:49 json_config -- common/autotest_common.sh@965 -- # kill 3916509 00:05:14.071 [2024-05-15 01:07:49.571854] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:14.071 01:07:49 json_config -- common/autotest_common.sh@970 -- # wait 3916509 00:05:15.974 01:07:51 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.974 01:07:51 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:15.974 01:07:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.974 01:07:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.974 01:07:51 
json_config -- json_config/json_config.sh@328 -- # return 0 00:05:15.974 01:07:51 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:15.974 INFO: Success 00:05:15.974 00:05:15.974 real 0m16.326s 00:05:15.974 user 0m16.718s 00:05:15.974 sys 0m2.273s 00:05:15.974 01:07:51 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.974 01:07:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.974 ************************************ 00:05:15.974 END TEST json_config 00:05:15.974 ************************************ 00:05:16.232 01:07:51 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:16.232 01:07:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.232 01:07:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.232 01:07:51 -- common/autotest_common.sh@10 -- # set +x 00:05:16.232 ************************************ 00:05:16.232 START TEST json_config_extra_key 00:05:16.232 ************************************ 00:05:16.232 01:07:51 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:16.232 01:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:16.232 01:07:51 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:16.232 01:07:51 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:16.232 
01:07:51 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:16.232 01:07:51 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.232 01:07:51 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.232 01:07:51 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.232 01:07:51 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:16.232 01:07:51 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:16.232 01:07:51 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:16.232 01:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:16.232 01:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:16.232 01:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:16.232 01:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:16.232 01:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:16.232 01:07:51 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:16.232 01:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:16.232 01:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:16.232 01:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:16.232 01:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:16.232 01:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:16.232 INFO: launching applications... 00:05:16.232 01:07:51 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:16.232 01:07:51 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:16.232 01:07:51 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:16.232 01:07:51 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:16.232 01:07:51 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:16.232 01:07:51 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:16.232 01:07:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.232 01:07:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:16.232 01:07:51 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3917950 00:05:16.232 01:07:51 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:16.232 Waiting for target to run... 00:05:16.232 01:07:51 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3917950 /var/tmp/spdk_tgt.sock 00:05:16.232 01:07:51 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 3917950 ']' 00:05:16.232 01:07:51 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:16.232 01:07:51 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:16.232 01:07:51 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:16.232 01:07:51 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:16.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:16.232 01:07:51 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:16.232 01:07:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:16.232 [2024-05-15 01:07:51.879508] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:05:16.232 [2024-05-15 01:07:51.879563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3917950 ] 00:05:16.232 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.489 [2024-05-15 01:07:52.166064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.747 [2024-05-15 01:07:52.230183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.006 01:07:52 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:17.006 01:07:52 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:05:17.006 01:07:52 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:17.006 00:05:17.006 01:07:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:17.006 INFO: shutting down applications... 00:05:17.006 01:07:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:17.006 01:07:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:17.006 01:07:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:17.006 01:07:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3917950 ]] 00:05:17.006 01:07:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3917950 00:05:17.006 01:07:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:17.006 01:07:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.006 01:07:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3917950 00:05:17.006 01:07:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.574 01:07:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.574 01:07:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.574 01:07:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3917950 00:05:17.574 01:07:53 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:17.574 01:07:53 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:17.574 01:07:53 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:17.574 01:07:53 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:17.574 SPDK target shutdown done 00:05:17.574 01:07:53 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:17.574 Success 00:05:17.574 00:05:17.574 real 0m1.446s 00:05:17.574 user 0m1.193s 00:05:17.574 sys 0m0.414s 00:05:17.574 01:07:53 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:17.574 01:07:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:17.574 ************************************ 00:05:17.574 END TEST json_config_extra_key 00:05:17.574 ************************************ 00:05:17.574 01:07:53 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:17.574 01:07:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:17.574 01:07:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:17.574 01:07:53 -- common/autotest_common.sh@10 -- # set +x 00:05:17.574 ************************************ 
00:05:17.574 START TEST alias_rpc 00:05:17.574 ************************************ 00:05:17.574 01:07:53 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:17.834 * Looking for test storage... 00:05:17.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:17.834 01:07:53 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:17.834 01:07:53 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3918268 00:05:17.834 01:07:53 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:17.834 01:07:53 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3918268 00:05:17.834 01:07:53 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 3918268 ']' 00:05:17.834 01:07:53 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.834 01:07:53 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:17.834 01:07:53 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.834 01:07:53 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:17.834 01:07:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.834 [2024-05-15 01:07:53.423277] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:05:17.834 [2024-05-15 01:07:53.423325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3918268 ] 00:05:17.834 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.834 [2024-05-15 01:07:53.492594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.093 [2024-05-15 01:07:53.562412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.660 01:07:54 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:18.660 01:07:54 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:18.660 01:07:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:18.919 01:07:54 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3918268 00:05:18.919 01:07:54 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 3918268 ']' 00:05:18.919 01:07:54 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 3918268 00:05:18.919 01:07:54 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:05:18.919 01:07:54 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:18.919 01:07:54 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3918268 00:05:18.920 01:07:54 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:18.920 01:07:54 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:18.920 01:07:54 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3918268' 00:05:18.920 killing process with pid 3918268 00:05:18.920 01:07:54 alias_rpc -- common/autotest_common.sh@965 -- # kill 3918268 00:05:18.920 01:07:54 alias_rpc -- common/autotest_common.sh@970 -- # wait 3918268 
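Condensed, the alias_rpc run above starts a bare spdk_tgt, replays a configuration through load_config -i so the aliased method names are exercised, and then kills the target. A minimal stand-alone sketch, assuming the same checkout, that -i keeps its include-aliases meaning, and with config.json standing in for the test's generated input:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Start the target on its default RPC socket and wait for the socket to
  # appear (a simplified stand-in for the harness's waitforlisten helper).
  $SPDK/build/bin/spdk_tgt &
  tgt_pid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

  # Replay a configuration; -i is passed exactly as in the trace above so the
  # aliased (deprecated) method names are accepted. config.json is a placeholder.
  $SPDK/scripts/rpc.py load_config -i < config.json

  # Tear the target down, as killprocess does above.
  kill "$tgt_pid"
  wait "$tgt_pid"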
00:05:19.179 00:05:19.179 real 0m1.522s 00:05:19.179 user 0m1.618s 00:05:19.179 sys 0m0.434s 00:05:19.179 01:07:54 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:19.179 01:07:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.179 ************************************ 00:05:19.179 END TEST alias_rpc 00:05:19.179 ************************************ 00:05:19.179 01:07:54 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:19.179 01:07:54 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:19.179 01:07:54 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.179 01:07:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.179 01:07:54 -- common/autotest_common.sh@10 -- # set +x 00:05:19.179 ************************************ 00:05:19.179 START TEST spdkcli_tcp 00:05:19.179 ************************************ 00:05:19.179 01:07:54 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:19.439 * Looking for test storage... 00:05:19.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:19.439 01:07:54 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:19.439 01:07:54 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:19.439 01:07:54 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:19.439 01:07:54 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:19.439 01:07:54 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:19.439 01:07:54 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:19.439 01:07:54 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:19.439 01:07:54 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:19.439 01:07:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:19.439 01:07:54 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3918588 00:05:19.439 01:07:54 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3918588 00:05:19.439 01:07:54 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 3918588 ']' 00:05:19.439 01:07:54 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.439 01:07:54 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:19.439 01:07:54 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.439 01:07:54 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:19.439 01:07:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:19.439 01:07:54 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:19.439 [2024-05-15 01:07:55.034976] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:05:19.439 [2024-05-15 01:07:55.035032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3918588 ] 00:05:19.439 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.439 [2024-05-15 01:07:55.103780] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.699 [2024-05-15 01:07:55.180432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.699 [2024-05-15 01:07:55.180436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.268 01:07:55 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:20.268 01:07:55 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:05:20.268 01:07:55 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3918854 00:05:20.268 01:07:55 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:20.268 01:07:55 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:20.528 [ 00:05:20.528 "bdev_malloc_delete", 00:05:20.528 "bdev_malloc_create", 00:05:20.528 "bdev_null_resize", 00:05:20.528 "bdev_null_delete", 00:05:20.528 "bdev_null_create", 00:05:20.528 "bdev_nvme_cuse_unregister", 00:05:20.528 "bdev_nvme_cuse_register", 00:05:20.528 "bdev_opal_new_user", 00:05:20.528 "bdev_opal_set_lock_state", 00:05:20.528 "bdev_opal_delete", 00:05:20.528 "bdev_opal_get_info", 00:05:20.528 "bdev_opal_create", 00:05:20.528 "bdev_nvme_opal_revert", 00:05:20.528 "bdev_nvme_opal_init", 00:05:20.528 "bdev_nvme_send_cmd", 00:05:20.528 "bdev_nvme_get_path_iostat", 00:05:20.528 "bdev_nvme_get_mdns_discovery_info", 00:05:20.528 "bdev_nvme_stop_mdns_discovery", 00:05:20.528 "bdev_nvme_start_mdns_discovery", 00:05:20.528 "bdev_nvme_set_multipath_policy", 00:05:20.528 "bdev_nvme_set_preferred_path", 00:05:20.528 "bdev_nvme_get_io_paths", 00:05:20.528 "bdev_nvme_remove_error_injection", 00:05:20.528 "bdev_nvme_add_error_injection", 00:05:20.528 "bdev_nvme_get_discovery_info", 00:05:20.528 "bdev_nvme_stop_discovery", 00:05:20.528 "bdev_nvme_start_discovery", 00:05:20.528 "bdev_nvme_get_controller_health_info", 00:05:20.528 "bdev_nvme_disable_controller", 00:05:20.528 "bdev_nvme_enable_controller", 00:05:20.528 "bdev_nvme_reset_controller", 00:05:20.528 "bdev_nvme_get_transport_statistics", 00:05:20.528 "bdev_nvme_apply_firmware", 00:05:20.528 "bdev_nvme_detach_controller", 00:05:20.528 "bdev_nvme_get_controllers", 00:05:20.528 "bdev_nvme_attach_controller", 00:05:20.528 "bdev_nvme_set_hotplug", 00:05:20.528 "bdev_nvme_set_options", 00:05:20.528 "bdev_passthru_delete", 00:05:20.528 "bdev_passthru_create", 00:05:20.528 "bdev_lvol_check_shallow_copy", 00:05:20.528 "bdev_lvol_start_shallow_copy", 00:05:20.528 "bdev_lvol_grow_lvstore", 00:05:20.528 "bdev_lvol_get_lvols", 00:05:20.528 "bdev_lvol_get_lvstores", 00:05:20.528 "bdev_lvol_delete", 00:05:20.528 "bdev_lvol_set_read_only", 00:05:20.528 "bdev_lvol_resize", 00:05:20.528 "bdev_lvol_decouple_parent", 00:05:20.528 "bdev_lvol_inflate", 00:05:20.528 "bdev_lvol_rename", 00:05:20.528 "bdev_lvol_clone_bdev", 00:05:20.528 "bdev_lvol_clone", 00:05:20.528 "bdev_lvol_snapshot", 00:05:20.528 "bdev_lvol_create", 00:05:20.528 "bdev_lvol_delete_lvstore", 00:05:20.528 "bdev_lvol_rename_lvstore", 00:05:20.528 "bdev_lvol_create_lvstore", 00:05:20.528 "bdev_raid_set_options", 
00:05:20.528 "bdev_raid_remove_base_bdev", 00:05:20.528 "bdev_raid_add_base_bdev", 00:05:20.528 "bdev_raid_delete", 00:05:20.528 "bdev_raid_create", 00:05:20.528 "bdev_raid_get_bdevs", 00:05:20.528 "bdev_error_inject_error", 00:05:20.528 "bdev_error_delete", 00:05:20.528 "bdev_error_create", 00:05:20.528 "bdev_split_delete", 00:05:20.528 "bdev_split_create", 00:05:20.528 "bdev_delay_delete", 00:05:20.528 "bdev_delay_create", 00:05:20.528 "bdev_delay_update_latency", 00:05:20.528 "bdev_zone_block_delete", 00:05:20.528 "bdev_zone_block_create", 00:05:20.528 "blobfs_create", 00:05:20.528 "blobfs_detect", 00:05:20.528 "blobfs_set_cache_size", 00:05:20.528 "bdev_aio_delete", 00:05:20.528 "bdev_aio_rescan", 00:05:20.528 "bdev_aio_create", 00:05:20.528 "bdev_ftl_set_property", 00:05:20.528 "bdev_ftl_get_properties", 00:05:20.528 "bdev_ftl_get_stats", 00:05:20.528 "bdev_ftl_unmap", 00:05:20.528 "bdev_ftl_unload", 00:05:20.528 "bdev_ftl_delete", 00:05:20.528 "bdev_ftl_load", 00:05:20.528 "bdev_ftl_create", 00:05:20.528 "bdev_virtio_attach_controller", 00:05:20.528 "bdev_virtio_scsi_get_devices", 00:05:20.528 "bdev_virtio_detach_controller", 00:05:20.528 "bdev_virtio_blk_set_hotplug", 00:05:20.528 "bdev_iscsi_delete", 00:05:20.528 "bdev_iscsi_create", 00:05:20.528 "bdev_iscsi_set_options", 00:05:20.528 "accel_error_inject_error", 00:05:20.528 "ioat_scan_accel_module", 00:05:20.528 "dsa_scan_accel_module", 00:05:20.528 "iaa_scan_accel_module", 00:05:20.528 "vfu_virtio_create_scsi_endpoint", 00:05:20.528 "vfu_virtio_scsi_remove_target", 00:05:20.528 "vfu_virtio_scsi_add_target", 00:05:20.528 "vfu_virtio_create_blk_endpoint", 00:05:20.528 "vfu_virtio_delete_endpoint", 00:05:20.528 "keyring_file_remove_key", 00:05:20.528 "keyring_file_add_key", 00:05:20.528 "iscsi_get_histogram", 00:05:20.528 "iscsi_enable_histogram", 00:05:20.528 "iscsi_set_options", 00:05:20.528 "iscsi_get_auth_groups", 00:05:20.528 "iscsi_auth_group_remove_secret", 00:05:20.528 "iscsi_auth_group_add_secret", 00:05:20.528 "iscsi_delete_auth_group", 00:05:20.528 "iscsi_create_auth_group", 00:05:20.528 "iscsi_set_discovery_auth", 00:05:20.528 "iscsi_get_options", 00:05:20.528 "iscsi_target_node_request_logout", 00:05:20.528 "iscsi_target_node_set_redirect", 00:05:20.528 "iscsi_target_node_set_auth", 00:05:20.528 "iscsi_target_node_add_lun", 00:05:20.528 "iscsi_get_stats", 00:05:20.528 "iscsi_get_connections", 00:05:20.528 "iscsi_portal_group_set_auth", 00:05:20.528 "iscsi_start_portal_group", 00:05:20.528 "iscsi_delete_portal_group", 00:05:20.528 "iscsi_create_portal_group", 00:05:20.528 "iscsi_get_portal_groups", 00:05:20.528 "iscsi_delete_target_node", 00:05:20.528 "iscsi_target_node_remove_pg_ig_maps", 00:05:20.528 "iscsi_target_node_add_pg_ig_maps", 00:05:20.528 "iscsi_create_target_node", 00:05:20.528 "iscsi_get_target_nodes", 00:05:20.528 "iscsi_delete_initiator_group", 00:05:20.528 "iscsi_initiator_group_remove_initiators", 00:05:20.528 "iscsi_initiator_group_add_initiators", 00:05:20.528 "iscsi_create_initiator_group", 00:05:20.528 "iscsi_get_initiator_groups", 00:05:20.528 "nvmf_set_crdt", 00:05:20.528 "nvmf_set_config", 00:05:20.528 "nvmf_set_max_subsystems", 00:05:20.528 "nvmf_subsystem_get_listeners", 00:05:20.528 "nvmf_subsystem_get_qpairs", 00:05:20.528 "nvmf_subsystem_get_controllers", 00:05:20.528 "nvmf_get_stats", 00:05:20.528 "nvmf_get_transports", 00:05:20.528 "nvmf_create_transport", 00:05:20.528 "nvmf_get_targets", 00:05:20.528 "nvmf_delete_target", 00:05:20.528 "nvmf_create_target", 00:05:20.528 
"nvmf_subsystem_allow_any_host", 00:05:20.528 "nvmf_subsystem_remove_host", 00:05:20.528 "nvmf_subsystem_add_host", 00:05:20.528 "nvmf_ns_remove_host", 00:05:20.528 "nvmf_ns_add_host", 00:05:20.528 "nvmf_subsystem_remove_ns", 00:05:20.528 "nvmf_subsystem_add_ns", 00:05:20.528 "nvmf_subsystem_listener_set_ana_state", 00:05:20.528 "nvmf_discovery_get_referrals", 00:05:20.528 "nvmf_discovery_remove_referral", 00:05:20.528 "nvmf_discovery_add_referral", 00:05:20.529 "nvmf_subsystem_remove_listener", 00:05:20.529 "nvmf_subsystem_add_listener", 00:05:20.529 "nvmf_delete_subsystem", 00:05:20.529 "nvmf_create_subsystem", 00:05:20.529 "nvmf_get_subsystems", 00:05:20.529 "env_dpdk_get_mem_stats", 00:05:20.529 "nbd_get_disks", 00:05:20.529 "nbd_stop_disk", 00:05:20.529 "nbd_start_disk", 00:05:20.529 "ublk_recover_disk", 00:05:20.529 "ublk_get_disks", 00:05:20.529 "ublk_stop_disk", 00:05:20.529 "ublk_start_disk", 00:05:20.529 "ublk_destroy_target", 00:05:20.529 "ublk_create_target", 00:05:20.529 "virtio_blk_create_transport", 00:05:20.529 "virtio_blk_get_transports", 00:05:20.529 "vhost_controller_set_coalescing", 00:05:20.529 "vhost_get_controllers", 00:05:20.529 "vhost_delete_controller", 00:05:20.529 "vhost_create_blk_controller", 00:05:20.529 "vhost_scsi_controller_remove_target", 00:05:20.529 "vhost_scsi_controller_add_target", 00:05:20.529 "vhost_start_scsi_controller", 00:05:20.529 "vhost_create_scsi_controller", 00:05:20.529 "thread_set_cpumask", 00:05:20.529 "framework_get_scheduler", 00:05:20.529 "framework_set_scheduler", 00:05:20.529 "framework_get_reactors", 00:05:20.529 "thread_get_io_channels", 00:05:20.529 "thread_get_pollers", 00:05:20.529 "thread_get_stats", 00:05:20.529 "framework_monitor_context_switch", 00:05:20.529 "spdk_kill_instance", 00:05:20.529 "log_enable_timestamps", 00:05:20.529 "log_get_flags", 00:05:20.529 "log_clear_flag", 00:05:20.529 "log_set_flag", 00:05:20.529 "log_get_level", 00:05:20.529 "log_set_level", 00:05:20.529 "log_get_print_level", 00:05:20.529 "log_set_print_level", 00:05:20.529 "framework_enable_cpumask_locks", 00:05:20.529 "framework_disable_cpumask_locks", 00:05:20.529 "framework_wait_init", 00:05:20.529 "framework_start_init", 00:05:20.529 "scsi_get_devices", 00:05:20.529 "bdev_get_histogram", 00:05:20.529 "bdev_enable_histogram", 00:05:20.529 "bdev_set_qos_limit", 00:05:20.529 "bdev_set_qd_sampling_period", 00:05:20.529 "bdev_get_bdevs", 00:05:20.529 "bdev_reset_iostat", 00:05:20.529 "bdev_get_iostat", 00:05:20.529 "bdev_examine", 00:05:20.529 "bdev_wait_for_examine", 00:05:20.529 "bdev_set_options", 00:05:20.529 "notify_get_notifications", 00:05:20.529 "notify_get_types", 00:05:20.529 "accel_get_stats", 00:05:20.529 "accel_set_options", 00:05:20.529 "accel_set_driver", 00:05:20.529 "accel_crypto_key_destroy", 00:05:20.529 "accel_crypto_keys_get", 00:05:20.529 "accel_crypto_key_create", 00:05:20.529 "accel_assign_opc", 00:05:20.529 "accel_get_module_info", 00:05:20.529 "accel_get_opc_assignments", 00:05:20.529 "vmd_rescan", 00:05:20.529 "vmd_remove_device", 00:05:20.529 "vmd_enable", 00:05:20.529 "sock_get_default_impl", 00:05:20.529 "sock_set_default_impl", 00:05:20.529 "sock_impl_set_options", 00:05:20.529 "sock_impl_get_options", 00:05:20.529 "iobuf_get_stats", 00:05:20.529 "iobuf_set_options", 00:05:20.529 "keyring_get_keys", 00:05:20.529 "framework_get_pci_devices", 00:05:20.529 "framework_get_config", 00:05:20.529 "framework_get_subsystems", 00:05:20.529 "vfu_tgt_set_base_path", 00:05:20.529 "trace_get_info", 00:05:20.529 
"trace_get_tpoint_group_mask", 00:05:20.529 "trace_disable_tpoint_group", 00:05:20.529 "trace_enable_tpoint_group", 00:05:20.529 "trace_clear_tpoint_mask", 00:05:20.529 "trace_set_tpoint_mask", 00:05:20.529 "spdk_get_version", 00:05:20.529 "rpc_get_methods" 00:05:20.529 ] 00:05:20.529 01:07:55 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:20.529 01:07:55 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:20.529 01:07:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:20.529 01:07:56 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:20.529 01:07:56 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3918588 00:05:20.529 01:07:56 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 3918588 ']' 00:05:20.529 01:07:56 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 3918588 00:05:20.529 01:07:56 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:05:20.529 01:07:56 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:20.529 01:07:56 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3918588 00:05:20.529 01:07:56 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:20.529 01:07:56 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:20.529 01:07:56 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3918588' 00:05:20.529 killing process with pid 3918588 00:05:20.529 01:07:56 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 3918588 00:05:20.529 01:07:56 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 3918588 00:05:20.789 00:05:20.789 real 0m1.545s 00:05:20.789 user 0m2.773s 00:05:20.789 sys 0m0.497s 00:05:20.789 01:07:56 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:20.789 01:07:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:20.789 ************************************ 00:05:20.789 END TEST spdkcli_tcp 00:05:20.789 ************************************ 00:05:20.789 01:07:56 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:20.789 01:07:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:20.789 01:07:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:20.789 01:07:56 -- common/autotest_common.sh@10 -- # set +x 00:05:21.049 ************************************ 00:05:21.049 START TEST dpdk_mem_utility 00:05:21.049 ************************************ 00:05:21.049 01:07:56 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:21.049 * Looking for test storage... 
00:05:21.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:21.049 01:07:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:21.049 01:07:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3918931 00:05:21.049 01:07:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3918931 00:05:21.049 01:07:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:21.049 01:07:56 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 3918931 ']' 00:05:21.049 01:07:56 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.049 01:07:56 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:21.049 01:07:56 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.049 01:07:56 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:21.049 01:07:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:21.049 [2024-05-15 01:07:56.656022] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:05:21.049 [2024-05-15 01:07:56.656068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3918931 ] 00:05:21.049 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.049 [2024-05-15 01:07:56.726047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.309 [2024-05-15 01:07:56.796999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.878 01:07:57 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:21.879 01:07:57 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:05:21.879 01:07:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:21.879 01:07:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:21.879 01:07:57 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:21.879 01:07:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:21.879 { 00:05:21.879 "filename": "/tmp/spdk_mem_dump.txt" 00:05:21.879 } 00:05:21.879 01:07:57 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:21.879 01:07:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:21.879 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:21.879 1 heaps totaling size 814.000000 MiB 00:05:21.879 size: 814.000000 MiB heap id: 0 00:05:21.879 end heaps---------- 00:05:21.879 8 mempools totaling size 598.116089 MiB 00:05:21.879 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:21.879 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:21.879 size: 84.521057 MiB name: bdev_io_3918931 00:05:21.879 size: 51.011292 MiB name: evtpool_3918931 00:05:21.879 size: 50.003479 MiB name: 
msgpool_3918931 00:05:21.879 size: 21.763794 MiB name: PDU_Pool 00:05:21.879 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:21.879 size: 0.026123 MiB name: Session_Pool 00:05:21.879 end mempools------- 00:05:21.879 6 memzones totaling size 4.142822 MiB 00:05:21.879 size: 1.000366 MiB name: RG_ring_0_3918931 00:05:21.879 size: 1.000366 MiB name: RG_ring_1_3918931 00:05:21.879 size: 1.000366 MiB name: RG_ring_4_3918931 00:05:21.879 size: 1.000366 MiB name: RG_ring_5_3918931 00:05:21.879 size: 0.125366 MiB name: RG_ring_2_3918931 00:05:21.879 size: 0.015991 MiB name: RG_ring_3_3918931 00:05:21.879 end memzones------- 00:05:21.879 01:07:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:21.879 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:21.879 list of free elements. size: 12.519348 MiB 00:05:21.879 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:21.879 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:21.879 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:21.879 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:21.879 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:21.879 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:21.879 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:21.879 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:21.879 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:21.879 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:21.879 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:21.879 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:21.879 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:21.879 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:21.879 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:21.879 list of standard malloc elements. 
size: 199.218079 MiB 00:05:21.879 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:21.879 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:21.879 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:21.879 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:21.879 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:21.879 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:21.879 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:21.879 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:21.879 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:21.879 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:21.879 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:21.879 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:21.879 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:21.879 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:21.879 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:21.879 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:21.879 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:21.879 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:21.879 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:21.879 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:21.879 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:21.879 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:21.879 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:21.879 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:21.879 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:21.879 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:21.879 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:21.879 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:21.879 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:21.879 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:21.879 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:21.879 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:21.879 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:21.879 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:21.879 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:21.879 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:21.879 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:21.879 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:21.879 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:21.879 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:21.879 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:21.879 list of memzone associated elements. 
size: 602.262573 MiB 00:05:21.879 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:21.879 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:21.879 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:21.879 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:21.879 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:21.879 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3918931_0 00:05:21.879 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:21.879 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3918931_0 00:05:21.879 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:21.879 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3918931_0 00:05:21.879 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:21.879 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:21.879 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:21.879 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:21.879 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:21.879 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3918931 00:05:21.879 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:21.879 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3918931 00:05:21.879 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:21.879 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3918931 00:05:21.879 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:21.879 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:21.879 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:21.879 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:21.879 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:21.879 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:21.879 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:21.879 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:21.879 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:21.879 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3918931 00:05:21.879 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:21.879 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3918931 00:05:21.879 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:21.879 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3918931 00:05:21.879 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:21.879 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3918931 00:05:21.879 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:21.879 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3918931 00:05:21.879 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:21.879 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:21.879 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:21.879 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:21.879 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:21.879 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:21.879 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:21.879 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3918931 00:05:21.879 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:21.879 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:21.879 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:21.879 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:21.879 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:21.879 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3918931 00:05:21.879 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:21.879 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:21.879 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:21.879 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3918931 00:05:21.879 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:21.879 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3918931 00:05:21.879 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:21.879 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:21.879 01:07:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:21.879 01:07:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3918931 00:05:21.880 01:07:57 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 3918931 ']' 00:05:21.880 01:07:57 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 3918931 00:05:21.880 01:07:57 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:05:22.139 01:07:57 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:22.139 01:07:57 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3918931 00:05:22.139 01:07:57 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:22.139 01:07:57 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:22.139 01:07:57 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3918931' 00:05:22.139 killing process with pid 3918931 00:05:22.139 01:07:57 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 3918931 00:05:22.139 01:07:57 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 3918931 00:05:22.432 00:05:22.432 real 0m1.452s 00:05:22.432 user 0m1.488s 00:05:22.432 sys 0m0.453s 00:05:22.432 01:07:57 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:22.432 01:07:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:22.432 ************************************ 00:05:22.432 END TEST dpdk_mem_utility 00:05:22.432 ************************************ 00:05:22.432 01:07:57 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:22.432 01:07:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:22.432 01:07:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.432 01:07:57 -- common/autotest_common.sh@10 -- # set +x 00:05:22.432 ************************************ 00:05:22.432 START TEST event 00:05:22.433 ************************************ 00:05:22.433 01:07:58 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:22.433 * Looking for test storage... 
00:05:22.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:22.433 01:07:58 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:22.433 01:07:58 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:22.433 01:07:58 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:22.433 01:07:58 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:22.433 01:07:58 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:22.433 01:07:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.700 ************************************ 00:05:22.700 START TEST event_perf 00:05:22.700 ************************************ 00:05:22.700 01:07:58 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:22.700 Running I/O for 1 seconds...[2024-05-15 01:07:58.182026] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:05:22.700 [2024-05-15 01:07:58.182105] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919255 ] 00:05:22.700 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.700 [2024-05-15 01:07:58.255761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:22.700 [2024-05-15 01:07:58.328430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.700 [2024-05-15 01:07:58.328522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.700 [2024-05-15 01:07:58.328614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.700 [2024-05-15 01:07:58.328619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.079 Running I/O for 1 seconds... 00:05:24.079 lcore 0: 198919 00:05:24.079 lcore 1: 198919 00:05:24.079 lcore 2: 198920 00:05:24.079 lcore 3: 198919 00:05:24.079 done. 00:05:24.079 00:05:24.079 real 0m1.255s 00:05:24.079 user 0m4.148s 00:05:24.079 sys 0m0.104s 00:05:24.079 01:07:59 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.079 01:07:59 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:24.079 ************************************ 00:05:24.079 END TEST event_perf 00:05:24.079 ************************************ 00:05:24.079 01:07:59 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:24.079 01:07:59 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:24.080 01:07:59 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.080 01:07:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.080 ************************************ 00:05:24.080 START TEST event_reactor 00:05:24.080 ************************************ 00:05:24.080 01:07:59 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:24.080 [2024-05-15 01:07:59.524620] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:05:24.080 [2024-05-15 01:07:59.524702] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919547 ] 00:05:24.080 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.080 [2024-05-15 01:07:59.597374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.080 [2024-05-15 01:07:59.664093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.457 test_start 00:05:25.457 oneshot 00:05:25.457 tick 100 00:05:25.457 tick 100 00:05:25.457 tick 250 00:05:25.457 tick 100 00:05:25.457 tick 100 00:05:25.457 tick 100 00:05:25.457 tick 250 00:05:25.457 tick 500 00:05:25.457 tick 100 00:05:25.457 tick 100 00:05:25.457 tick 250 00:05:25.457 tick 100 00:05:25.457 tick 100 00:05:25.457 test_end 00:05:25.457 00:05:25.457 real 0m1.251s 00:05:25.457 user 0m1.155s 00:05:25.457 sys 0m0.091s 00:05:25.457 01:08:00 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.457 01:08:00 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:25.457 ************************************ 00:05:25.457 END TEST event_reactor 00:05:25.457 ************************************ 00:05:25.457 01:08:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:25.457 01:08:00 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:25.457 01:08:00 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.457 01:08:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.457 ************************************ 00:05:25.457 START TEST event_reactor_perf 00:05:25.457 ************************************ 00:05:25.457 01:08:00 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:25.457 [2024-05-15 01:08:00.858141] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:05:25.457 [2024-05-15 01:08:00.858224] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3919827 ] 00:05:25.457 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.457 [2024-05-15 01:08:00.928533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.457 [2024-05-15 01:08:00.996124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.393 test_start 00:05:26.393 test_end 00:05:26.393 Performance: 524113 events per second 00:05:26.393 00:05:26.393 real 0m1.244s 00:05:26.393 user 0m1.154s 00:05:26.393 sys 0m0.086s 00:05:26.393 01:08:02 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.393 01:08:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.393 ************************************ 00:05:26.393 END TEST event_reactor_perf 00:05:26.393 ************************************ 00:05:26.651 01:08:02 event -- event/event.sh@49 -- # uname -s 00:05:26.651 01:08:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:26.651 01:08:02 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:26.651 01:08:02 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.651 01:08:02 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.651 01:08:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.651 ************************************ 00:05:26.651 START TEST event_scheduler 00:05:26.651 ************************************ 00:05:26.651 01:08:02 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:26.651 * Looking for test storage... 00:05:26.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:26.651 01:08:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:26.651 01:08:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3920135 00:05:26.651 01:08:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.651 01:08:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:26.651 01:08:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3920135 00:05:26.651 01:08:02 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 3920135 ']' 00:05:26.651 01:08:02 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.651 01:08:02 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:26.651 01:08:02 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:26.651 01:08:02 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:26.651 01:08:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.651 [2024-05-15 01:08:02.303148] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:05:26.651 [2024-05-15 01:08:02.303198] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920135 ] 00:05:26.651 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.910 [2024-05-15 01:08:02.368502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:26.910 [2024-05-15 01:08:02.441353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.910 [2024-05-15 01:08:02.441437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.910 [2024-05-15 01:08:02.441518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.910 [2024-05-15 01:08:02.441520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.479 01:08:03 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:27.479 01:08:03 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:05:27.479 01:08:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:27.479 01:08:03 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.479 01:08:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.479 POWER: Env isn't set yet! 00:05:27.479 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:27.479 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:27.479 POWER: Cannot set governor of lcore 0 to userspace 00:05:27.479 POWER: Attempting to initialise PSTAT power management... 00:05:27.479 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:27.479 POWER: Initialized successfully for lcore 0 power management 00:05:27.479 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:27.479 POWER: Initialized successfully for lcore 1 power management 00:05:27.479 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:27.479 POWER: Initialized successfully for lcore 2 power management 00:05:27.479 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:27.479 POWER: Initialized successfully for lcore 3 power management 00:05:27.479 01:08:03 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.479 01:08:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:27.479 01:08:03 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.479 01:08:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.739 [2024-05-15 01:08:03.221052] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
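The startup just logged, dynamic scheduler selected and framework init completing on all four reactors under power management, is driven by two ordinary RPCs, since the scheduler app was launched with --wait-for-rpc. Issued by hand against the same default socket they would look roughly like this (a sketch of the RPC sequence, not the test binary itself):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Pick the dynamic scheduler while initialization is still held back
  # by --wait-for-rpc.
  $RPC framework_set_scheduler dynamic

  # Let initialization finish; this is the point at which the reactors start
  # and "Scheduler test application started" is printed above.
  $RPC framework_start_init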
00:05:27.739 01:08:03 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.739 01:08:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:27.739 01:08:03 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.739 01:08:03 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.739 01:08:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:27.739 ************************************ 00:05:27.739 START TEST scheduler_create_thread 00:05:27.739 ************************************ 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.739 2 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.739 3 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.739 4 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.739 5 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.739 6 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.739 7 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.739 8 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.739 9 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.739 10 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.739 01:08:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.674 01:08:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.674 01:08:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:28.674 01:08:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.674 01:08:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.052 01:08:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.052 01:08:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:30.052 01:08:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:30.052 01:08:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:30.052 01:08:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.989 01:08:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:30.989 00:05:30.989 real 0m3.381s 00:05:30.989 user 0m0.024s 00:05:30.989 sys 0m0.006s 00:05:30.989 01:08:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:30.989 01:08:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:30.989 ************************************ 00:05:30.989 END TEST scheduler_create_thread 00:05:30.989 ************************************ 00:05:31.288 01:08:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:31.288 01:08:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3920135 00:05:31.288 01:08:06 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 3920135 ']' 00:05:31.288 01:08:06 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 3920135 00:05:31.288 01:08:06 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:05:31.288 01:08:06 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:31.288 01:08:06 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3920135 00:05:31.288 01:08:06 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:31.288 01:08:06 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:31.288 01:08:06 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3920135' 00:05:31.288 killing process with pid 3920135 00:05:31.288 01:08:06 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 3920135 00:05:31.288 01:08:06 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 3920135 00:05:31.547 [2024-05-15 01:08:07.029044] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
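The scheduler_create_thread subtest that just finished drives a test-only RPC plugin: it creates pinned and unpinned threads with a cpumask (-m) and an active percentage (-a), retunes one thread with scheduler_thread_set_active, and deletes another. Outside the harness the same calls look roughly like this, assuming scheduler_plugin.py from the test directory is importable (e.g. on PYTHONPATH); the thread ids are whatever the create calls return (11 and 12 in the run above):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC="$SPDK/scripts/rpc.py --plugin scheduler_plugin"

  # An active thread pinned to core 0: -m is the cpumask, -a the busy percentage.
  $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100

  # An unpinned thread that is busy about a third of the time.
  $RPC scheduler_thread_create -n one_third_active -a 30

  # Throttle one thread to 50% and remove another, as the trace does
  # with ids 11 and 12.
  $RPC scheduler_thread_set_active 11 50
  $RPC scheduler_thread_delete 12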
00:05:31.547 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:31.547 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:31.547 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:31.547 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:31.547 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:31.547 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:31.547 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:31.547 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:31.806 00:05:31.806 real 0m5.110s 00:05:31.806 user 0m10.495s 00:05:31.806 sys 0m0.423s 00:05:31.806 01:08:07 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.806 01:08:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.806 ************************************ 00:05:31.806 END TEST event_scheduler 00:05:31.806 ************************************ 00:05:31.806 01:08:07 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:31.806 01:08:07 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:31.806 01:08:07 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.806 01:08:07 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.806 01:08:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.806 ************************************ 00:05:31.806 START TEST app_repeat 00:05:31.806 ************************************ 00:05:31.806 01:08:07 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:05:31.806 01:08:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.806 01:08:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.806 01:08:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:31.806 01:08:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.806 01:08:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:31.806 01:08:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:31.806 01:08:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:31.806 01:08:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3920996 00:05:31.806 01:08:07 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.806 01:08:07 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:31.806 01:08:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3920996' 00:05:31.806 Process app_repeat pid: 3920996 00:05:31.806 01:08:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:31.806 01:08:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:31.806 spdk_app_start Round 0 00:05:31.806 01:08:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3920996 /var/tmp/spdk-nbd.sock 00:05:31.806 01:08:07 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3920996 ']' 00:05:31.806 01:08:07 event.app_repeat -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.806 01:08:07 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:31.806 01:08:07 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:31.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:31.806 01:08:07 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:31.806 01:08:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:31.806 [2024-05-15 01:08:07.410897] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:05:31.806 [2024-05-15 01:08:07.410962] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920996 ] 00:05:31.806 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.806 [2024-05-15 01:08:07.482877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.065 [2024-05-15 01:08:07.556201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.065 [2024-05-15 01:08:07.556202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.631 01:08:08 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:32.631 01:08:08 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:32.631 01:08:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.890 Malloc0 00:05:32.890 01:08:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.890 Malloc1 00:05:33.148 01:08:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.148 01:08:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.148 01:08:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.148 01:08:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.148 01:08:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.148 01:08:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.148 01:08:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.148 01:08:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.148 01:08:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.148 01:08:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.148 01:08:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.148 01:08:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.148 01:08:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:33.148 01:08:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.148 01:08:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.148 01:08:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:33.148 /dev/nbd0 00:05:33.148 01:08:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:33.148 01:08:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:33.148 01:08:08 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:33.148 01:08:08 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:33.148 01:08:08 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:33.148 01:08:08 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:33.148 01:08:08 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:33.148 01:08:08 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:33.148 01:08:08 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:33.148 01:08:08 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:33.148 01:08:08 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.148 1+0 records in 00:05:33.148 1+0 records out 00:05:33.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000287024 s, 14.3 MB/s 00:05:33.148 01:08:08 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.148 01:08:08 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:33.148 01:08:08 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.149 01:08:08 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:33.149 01:08:08 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:33.149 01:08:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.149 01:08:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.149 01:08:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.407 /dev/nbd1 00:05:33.407 01:08:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.407 01:08:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.407 01:08:08 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:33.407 01:08:08 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:33.407 01:08:08 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:33.407 01:08:08 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:33.407 01:08:08 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:33.407 01:08:08 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:33.407 01:08:08 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:33.407 01:08:08 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:33.407 01:08:08 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.407 1+0 records in 00:05:33.407 1+0 records out 00:05:33.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000217881 s, 18.8 MB/s 00:05:33.407 01:08:08 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.407 01:08:08 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:33.407 01:08:08 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.407 01:08:08 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:33.407 01:08:08 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:33.407 01:08:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.407 01:08:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.407 01:08:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.407 01:08:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.407 01:08:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:33.666 { 00:05:33.666 "nbd_device": "/dev/nbd0", 00:05:33.666 "bdev_name": "Malloc0" 00:05:33.666 }, 00:05:33.666 { 00:05:33.666 "nbd_device": "/dev/nbd1", 00:05:33.666 "bdev_name": "Malloc1" 00:05:33.666 } 00:05:33.666 ]' 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:33.666 { 00:05:33.666 "nbd_device": "/dev/nbd0", 00:05:33.666 "bdev_name": "Malloc0" 00:05:33.666 }, 00:05:33.666 { 00:05:33.666 "nbd_device": "/dev/nbd1", 00:05:33.666 "bdev_name": "Malloc1" 00:05:33.666 } 00:05:33.666 ]' 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:33.666 /dev/nbd1' 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:33.666 /dev/nbd1' 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:33.666 256+0 records in 00:05:33.666 256+0 records out 00:05:33.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108886 s, 96.3 MB/s 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:33.666 256+0 records in 00:05:33.666 256+0 records out 00:05:33.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196514 s, 53.4 MB/s 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:33.666 256+0 records in 00:05:33.666 256+0 records out 00:05:33.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177151 s, 59.2 MB/s 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:33.666 01:08:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.667 01:08:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:33.667 01:08:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.667 01:08:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.667 01:08:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.667 01:08:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:33.667 01:08:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.667 01:08:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:33.925 01:08:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:33.925 01:08:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:33.925 01:08:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:33.925 01:08:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.925 01:08:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.925 01:08:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:33.925 01:08:09 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:05:33.925 01:08:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.925 01:08:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.925 01:08:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.183 01:08:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.183 01:08:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.183 01:08:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.183 01:08:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.183 01:08:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.183 01:08:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:34.183 01:08:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.183 01:08:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.183 01:08:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.183 01:08:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.183 01:08:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.183 01:08:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.183 01:08:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.183 01:08:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.443 01:08:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.443 01:08:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.443 01:08:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.443 01:08:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:34.443 01:08:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.443 01:08:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.443 01:08:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.443 01:08:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.443 01:08:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.443 01:08:09 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.443 01:08:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:34.703 [2024-05-15 01:08:10.295541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.703 [2024-05-15 01:08:10.361590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.703 [2024-05-15 01:08:10.361592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.962 [2024-05-15 01:08:10.403813] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:34.962 [2024-05-15 01:08:10.403853] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
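Each app_repeat round in the trace has the same shape: create malloc bdevs, export them over NBD, push a random pattern through the block device, verify it with cmp, then tear the disks down and SIGTERM the app so it can move on to its next round. A minimal single-disk sketch of that flow, assuming the app_repeat binary is already listening on /var/tmp/spdk-nbd.sock and the nbd kernel module is loaded (the scratch-file path below is illustrative, not the workspace path used in the log):

  rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }
  tmp=/tmp/nbdrandtest                              # assumed scratch location

  rpc bdev_malloc_create 64 4096                    # 64 MiB malloc bdev, 4 KiB blocks -> prints "Malloc0"
  rpc nbd_start_disk Malloc0 /dev/nbd0              # export the bdev as /dev/nbd0

  dd if=/dev/urandom of="$tmp" bs=4096 count=256    # 1 MiB of random data
  dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$tmp" /dev/nbd0                     # read back through NBD and compare byte-for-byte
  rm -f "$tmp"

  rpc nbd_stop_disk /dev/nbd0
  rpc spdk_kill_instance SIGTERM                    # ends the round; the app starts the next one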
00:05:37.494 01:08:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:37.494 01:08:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:37.494 spdk_app_start Round 1 00:05:37.494 01:08:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3920996 /var/tmp/spdk-nbd.sock 00:05:37.494 01:08:13 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3920996 ']' 00:05:37.494 01:08:13 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.494 01:08:13 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:37.494 01:08:13 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:37.494 01:08:13 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:37.494 01:08:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.754 01:08:13 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:37.754 01:08:13 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:37.754 01:08:13 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.754 Malloc0 00:05:37.754 01:08:13 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.012 Malloc1 00:05:38.012 01:08:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.012 01:08:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.012 01:08:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.012 01:08:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.012 01:08:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.012 01:08:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.012 01:08:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.012 01:08:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.012 01:08:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.012 01:08:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.012 01:08:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.012 01:08:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.012 01:08:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:38.012 01:08:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.012 01:08:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.012 01:08:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.271 /dev/nbd0 00:05:38.271 01:08:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.271 01:08:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:38.271 01:08:13 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:38.271 01:08:13 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:38.271 01:08:13 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:38.271 01:08:13 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:38.271 01:08:13 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:38.271 01:08:13 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:38.271 01:08:13 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:38.271 01:08:13 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:38.271 01:08:13 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.271 1+0 records in 00:05:38.271 1+0 records out 00:05:38.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224392 s, 18.3 MB/s 00:05:38.271 01:08:13 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.271 01:08:13 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:38.271 01:08:13 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.271 01:08:13 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:38.271 01:08:13 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:38.271 01:08:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.271 01:08:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.271 01:08:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.530 /dev/nbd1 00:05:38.530 01:08:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.530 01:08:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.530 01:08:13 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:38.530 01:08:13 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:38.530 01:08:13 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:38.530 01:08:13 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:38.530 01:08:13 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:38.530 01:08:13 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:38.530 01:08:13 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:38.530 01:08:13 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:38.530 01:08:13 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.530 1+0 records in 00:05:38.530 1+0 records out 00:05:38.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242235 s, 16.9 MB/s 00:05:38.530 01:08:14 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.530 01:08:14 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:38.530 01:08:14 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.530 01:08:14 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:38.530 01:08:14 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:38.530 01:08:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.530 01:08:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.530 01:08:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.530 01:08:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.530 01:08:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.530 01:08:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:38.530 { 00:05:38.530 "nbd_device": "/dev/nbd0", 00:05:38.530 "bdev_name": "Malloc0" 00:05:38.530 }, 00:05:38.530 { 00:05:38.530 "nbd_device": "/dev/nbd1", 00:05:38.530 "bdev_name": "Malloc1" 00:05:38.530 } 00:05:38.530 ]' 00:05:38.530 01:08:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.530 { 00:05:38.530 "nbd_device": "/dev/nbd0", 00:05:38.530 "bdev_name": "Malloc0" 00:05:38.530 }, 00:05:38.530 { 00:05:38.530 "nbd_device": "/dev/nbd1", 00:05:38.530 "bdev_name": "Malloc1" 00:05:38.530 } 00:05:38.530 ]' 00:05:38.530 01:08:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.791 /dev/nbd1' 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.791 /dev/nbd1' 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.791 256+0 records in 00:05:38.791 256+0 records out 00:05:38.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0107893 s, 97.2 MB/s 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.791 256+0 records in 00:05:38.791 256+0 records out 00:05:38.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0192494 s, 54.5 MB/s 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.791 256+0 records in 00:05:38.791 256+0 records out 00:05:38.791 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191581 s, 54.7 MB/s 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.791 01:08:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.085 01:08:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.344 01:08:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.344 01:08:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.344 01:08:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.344 01:08:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.344 01:08:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.344 01:08:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.344 01:08:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.344 01:08:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.344 01:08:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.344 01:08:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.344 01:08:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.344 01:08:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.344 01:08:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.602 01:08:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:39.861 [2024-05-15 01:08:15.329775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.861 [2024-05-15 01:08:15.393101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.861 [2024-05-15 01:08:15.393103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.861 [2024-05-15 01:08:15.435610] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.861 [2024-05-15 01:08:15.435655] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
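Round 1 repeats the same verify flow; the only pieces the trace shows but does not spell out in full are the waitfornbd and waitfornbd_exit helpers, which poll /proc/partitions and then do a one-block direct read to prove the device is usable. A rough, hedged paraphrase of those helpers (the retry sleep and the /tmp scratch path are assumptions, not a verbatim copy of autotest_common.sh):

  waitfornbd() {
      local nbd_name=$1 i size
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                                  # assumed back-off between retries
      done
      # Read one block through the device to make sure I/O actually works.
      dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }

  waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions || break
          sleep 0.1                                  # assumed back-off between retries
      done
      return 0
  }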
00:05:43.147 01:08:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.147 01:08:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:43.147 spdk_app_start Round 2 00:05:43.147 01:08:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3920996 /var/tmp/spdk-nbd.sock 00:05:43.147 01:08:18 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3920996 ']' 00:05:43.147 01:08:18 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.147 01:08:18 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:43.147 01:08:18 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:43.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.147 01:08:18 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:43.147 01:08:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.147 01:08:18 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:43.147 01:08:18 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:43.148 01:08:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.148 Malloc0 00:05:43.148 01:08:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.148 Malloc1 00:05:43.148 01:08:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.148 01:08:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.148 01:08:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.148 01:08:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.148 01:08:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.148 01:08:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.148 01:08:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.148 01:08:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.148 01:08:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.148 01:08:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.148 01:08:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.148 01:08:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.148 01:08:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.148 01:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.148 01:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.148 01:08:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:43.148 /dev/nbd0 00:05:43.407 01:08:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:43.407 01:08:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:43.407 01:08:18 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:43.407 01:08:18 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:43.407 01:08:18 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:43.407 01:08:18 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:43.407 01:08:18 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:43.407 01:08:18 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:43.407 01:08:18 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:43.407 01:08:18 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:43.407 01:08:18 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.407 1+0 records in 00:05:43.407 1+0 records out 00:05:43.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225286 s, 18.2 MB/s 00:05:43.407 01:08:18 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.407 01:08:18 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:43.407 01:08:18 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.407 01:08:18 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:43.407 01:08:18 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:43.407 01:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.407 01:08:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.407 01:08:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:43.407 /dev/nbd1 00:05:43.407 01:08:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:43.407 01:08:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:43.407 01:08:19 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:43.407 01:08:19 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:43.407 01:08:19 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:43.407 01:08:19 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:43.407 01:08:19 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:43.407 01:08:19 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:43.407 01:08:19 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:43.407 01:08:19 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:43.407 01:08:19 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:43.407 1+0 records in 00:05:43.407 1+0 records out 00:05:43.407 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232292 s, 17.6 MB/s 00:05:43.407 01:08:19 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.407 01:08:19 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:43.407 01:08:19 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:43.407 01:08:19 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:43.407 01:08:19 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:43.407 01:08:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:43.407 01:08:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.407 01:08:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.407 01:08:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.407 01:08:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.667 01:08:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:43.667 { 00:05:43.667 "nbd_device": "/dev/nbd0", 00:05:43.667 "bdev_name": "Malloc0" 00:05:43.667 }, 00:05:43.667 { 00:05:43.667 "nbd_device": "/dev/nbd1", 00:05:43.667 "bdev_name": "Malloc1" 00:05:43.667 } 00:05:43.667 ]' 00:05:43.667 01:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.667 { 00:05:43.667 "nbd_device": "/dev/nbd0", 00:05:43.667 "bdev_name": "Malloc0" 00:05:43.667 }, 00:05:43.667 { 00:05:43.667 "nbd_device": "/dev/nbd1", 00:05:43.667 "bdev_name": "Malloc1" 00:05:43.667 } 00:05:43.667 ]' 00:05:43.667 01:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.667 01:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.667 /dev/nbd1' 00:05:43.667 01:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:43.667 /dev/nbd1' 00:05:43.667 01:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.667 01:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:43.667 01:08:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:43.667 01:08:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:43.667 01:08:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:43.667 01:08:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.668 256+0 records in 00:05:43.668 256+0 records out 00:05:43.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104494 s, 100 MB/s 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.668 256+0 records in 00:05:43.668 256+0 records out 00:05:43.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0161824 s, 64.8 MB/s 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.668 256+0 records in 00:05:43.668 256+0 records out 00:05:43.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172825 s, 60.7 MB/s 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.668 01:08:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.927 01:08:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:43.927 01:08:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.927 01:08:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.927 01:08:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.927 01:08:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.927 01:08:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:43.927 01:08:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.927 01:08:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:43.927 01:08:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:43.927 01:08:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:43.927 01:08:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:43.927 01:08:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.927 01:08:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.927 01:08:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:43.927 01:08:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.927 01:08:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.927 01:08:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.927 01:08:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:44.186 01:08:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:44.186 01:08:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:44.186 01:08:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:44.186 01:08:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.186 01:08:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.186 01:08:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:44.186 01:08:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.186 01:08:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.186 01:08:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.186 01:08:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.186 01:08:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.446 01:08:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:44.446 01:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:44.446 01:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.446 01:08:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:44.446 01:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.446 01:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:44.446 01:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:44.446 01:08:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:44.446 01:08:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:44.446 01:08:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:44.446 01:08:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:44.446 01:08:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:44.446 01:08:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.706 01:08:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:44.706 [2024-05-15 01:08:20.389247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.965 [2024-05-15 01:08:20.454999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.965 [2024-05-15 01:08:20.455002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.965 [2024-05-15 01:08:20.496993] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:44.965 [2024-05-15 01:08:20.497040] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
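After each round the test confirms that no NBD devices remain exported. As the trace shows, the count comes from the nbd_get_disks RPC filtered through jq and grep; a compact sketch of that check (the '|| true' guard mirrors the bare 'true' step visible in the xtrace, but the exact helper wording is assumed):

  rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }

  disks_json=$(rpc nbd_get_disks)                            # JSON list of {nbd_device, bdev_name}
  disks=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$disks" | grep -c /dev/nbd || true)          # grep -c exits 1 on zero matches
  echo "exported nbd devices: $count"                        # 2 mid-round, 0 after nbd_stop_disk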
00:05:47.502 01:08:23 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3920996 /var/tmp/spdk-nbd.sock 00:05:47.502 01:08:23 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 3920996 ']' 00:05:47.502 01:08:23 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.502 01:08:23 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:47.502 01:08:23 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:47.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.502 01:08:23 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:47.502 01:08:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.761 01:08:23 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:47.761 01:08:23 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:47.761 01:08:23 event.app_repeat -- event/event.sh@39 -- # killprocess 3920996 00:05:47.761 01:08:23 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 3920996 ']' 00:05:47.761 01:08:23 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 3920996 00:05:47.761 01:08:23 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:05:47.761 01:08:23 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:47.761 01:08:23 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3920996 00:05:47.761 01:08:23 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:47.761 01:08:23 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:47.761 01:08:23 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3920996' 00:05:47.761 killing process with pid 3920996 00:05:47.761 01:08:23 event.app_repeat -- common/autotest_common.sh@965 -- # kill 3920996 00:05:47.761 01:08:23 event.app_repeat -- common/autotest_common.sh@970 -- # wait 3920996 00:05:48.020 spdk_app_start is called in Round 0. 00:05:48.020 Shutdown signal received, stop current app iteration 00:05:48.020 Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 reinitialization... 00:05:48.020 spdk_app_start is called in Round 1. 00:05:48.020 Shutdown signal received, stop current app iteration 00:05:48.020 Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 reinitialization... 00:05:48.020 spdk_app_start is called in Round 2. 00:05:48.020 Shutdown signal received, stop current app iteration 00:05:48.020 Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 reinitialization... 00:05:48.020 spdk_app_start is called in Round 3. 
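Tearing an SPDK app down in these tests always goes through the killprocess pattern seen in the xtrace: probe the pid with kill -0, log the process name, then kill and wait so the next stage starts from a clean slate. A trimmed-down paraphrase (the sudo special case handled by the real helper is omitted here):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                     # already gone? nothing to do
      if [ "$(uname)" = Linux ]; then
          ps --no-headers -o comm= "$pid"            # the trace logs the comm (reactor_0 here) first
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                    # reap it so the next test starts clean
  }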
00:05:48.020 Shutdown signal received, stop current app iteration 00:05:48.020 01:08:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:48.020 01:08:23 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:48.020 00:05:48.020 real 0m16.223s 00:05:48.020 user 0m34.405s 00:05:48.020 sys 0m2.949s 00:05:48.020 01:08:23 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.020 01:08:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:48.020 ************************************ 00:05:48.020 END TEST app_repeat 00:05:48.020 ************************************ 00:05:48.020 01:08:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:48.020 01:08:23 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:48.020 01:08:23 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.020 01:08:23 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.020 01:08:23 event -- common/autotest_common.sh@10 -- # set +x 00:05:48.020 ************************************ 00:05:48.020 START TEST cpu_locks 00:05:48.020 ************************************ 00:05:48.020 01:08:23 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:48.278 * Looking for test storage... 00:05:48.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:48.278 01:08:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:48.278 01:08:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:48.278 01:08:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:48.278 01:08:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:48.278 01:08:23 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.278 01:08:23 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.278 01:08:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.278 ************************************ 00:05:48.278 START TEST default_locks 00:05:48.278 ************************************ 00:05:48.278 01:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:05:48.278 01:08:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3924147 00:05:48.278 01:08:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3924147 00:05:48.278 01:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3924147 ']' 00:05:48.278 01:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.278 01:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:48.278 01:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
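The "Waiting for process to start up and listen on UNIX domain socket ..." lines here and throughout the rest of the log come from a retry loop of roughly this shape. This is a simplified sketch, not the exact test/common/autotest_common.sh source; the rpc_get_methods probe is an assumption about how readiness is detected:

waitforlisten() {
    # poll until $pid is alive and its RPC socket answers (default /var/tmp/spdk.sock)
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" || return 1                              # target died during startup
        rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.5
    done
    return 1
}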
00:05:48.278 01:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:48.278 01:08:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:48.278 01:08:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:48.278 [2024-05-15 01:08:23.880865] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:05:48.278 [2024-05-15 01:08:23.880908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3924147 ] 00:05:48.278 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.278 [2024-05-15 01:08:23.948671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.537 [2024-05-15 01:08:24.023212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.105 01:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:49.105 01:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:05:49.105 01:08:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3924147 00:05:49.105 01:08:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3924147 00:05:49.105 01:08:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:49.364 lslocks: write error 00:05:49.364 01:08:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3924147 00:05:49.364 01:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 3924147 ']' 00:05:49.364 01:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 3924147 00:05:49.364 01:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:05:49.364 01:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:49.364 01:08:24 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3924147 00:05:49.364 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:49.364 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:49.364 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3924147' 00:05:49.364 killing process with pid 3924147 00:05:49.364 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 3924147 00:05:49.364 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 3924147 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3924147 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3924147 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:49.933 01:08:25 
event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 3924147 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 3924147 ']' 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3924147) - No such process 00:05:49.933 ERROR: process (pid: 3924147) is no longer running 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:49.933 00:05:49.933 real 0m1.543s 00:05:49.933 user 0m1.597s 00:05:49.933 sys 0m0.503s 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.933 01:08:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.933 ************************************ 00:05:49.933 END TEST default_locks 00:05:49.933 ************************************ 00:05:49.933 01:08:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:49.933 01:08:25 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:49.933 01:08:25 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.933 01:08:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.933 ************************************ 00:05:49.933 START TEST default_locks_via_rpc 00:05:49.933 ************************************ 00:05:49.933 01:08:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:05:49.933 01:08:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3924452 00:05:49.933 01:08:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3924452 00:05:49.933 
01:08:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3924452 ']' 00:05:49.933 01:08:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.933 01:08:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:49.933 01:08:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.934 01:08:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:49.934 01:08:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.934 01:08:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.934 [2024-05-15 01:08:25.496633] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:05:49.934 [2024-05-15 01:08:25.496677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3924452 ] 00:05:49.934 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.934 [2024-05-15 01:08:25.565024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.193 [2024-05-15 01:08:25.639200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.762 01:08:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:50.762 01:08:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:50.762 01:08:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:50.762 01:08:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.762 01:08:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.762 01:08:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.762 01:08:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:50.762 01:08:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:50.762 01:08:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:50.762 01:08:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:50.762 01:08:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:50.762 01:08:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:50.762 01:08:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.762 01:08:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:50.762 01:08:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3924452 00:05:50.762 01:08:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3924452 00:05:50.762 01:08:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # 
grep -q spdk_cpu_lock 00:05:51.021 01:08:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3924452 00:05:51.021 01:08:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 3924452 ']' 00:05:51.021 01:08:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 3924452 00:05:51.021 01:08:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:05:51.021 01:08:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:51.021 01:08:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3924452 00:05:51.021 01:08:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:51.021 01:08:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:51.021 01:08:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3924452' 00:05:51.021 killing process with pid 3924452 00:05:51.021 01:08:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 3924452 00:05:51.021 01:08:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 3924452 00:05:51.589 00:05:51.589 real 0m1.557s 00:05:51.589 user 0m1.622s 00:05:51.589 sys 0m0.526s 00:05:51.589 01:08:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:51.589 01:08:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.589 ************************************ 00:05:51.589 END TEST default_locks_via_rpc 00:05:51.589 ************************************ 00:05:51.589 01:08:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:51.589 01:08:27 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:51.589 01:08:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:51.589 01:08:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.589 ************************************ 00:05:51.589 START TEST non_locking_app_on_locked_coremask 00:05:51.589 ************************************ 00:05:51.589 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:05:51.589 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3924754 00:05:51.589 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3924754 /var/tmp/spdk.sock 00:05:51.589 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3924754 ']' 00:05:51.589 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.589 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:51.589 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
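For reference, the default_locks_via_rpc sequence that just finished above boils down to toggling the per-core lock files over RPC. A minimal sketch using the pid and paths from the trace; the trace's rpc_cmd wrapper is replaced here by a direct rpc.py call (which defaults to /var/tmp/spdk.sock):

pid=3924452                                        # spdk_tgt started with -m 0x1
rpc.py framework_disable_cpumask_locks             # release the /var/tmp/spdk_cpu_lock_* files
locks=(/var/tmp/spdk_cpu_lock_*)                   # with nullglob set, this array is now empty
rpc.py framework_enable_cpumask_locks              # re-claim the core 0 lock
lslocks -p "$pid" | grep -q spdk_cpu_lock          # and lslocks reports it again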
00:05:51.589 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:51.589 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.589 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.589 [2024-05-15 01:08:27.134336] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:05:51.589 [2024-05-15 01:08:27.134380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3924754 ] 00:05:51.589 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.589 [2024-05-15 01:08:27.202616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.589 [2024-05-15 01:08:27.276040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.528 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:52.528 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:52.528 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3924916 00:05:52.528 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3924916 /var/tmp/spdk2.sock 00:05:52.528 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3924916 ']' 00:05:52.528 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:52.528 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:52.528 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:52.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:52.528 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:52.528 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.528 01:08:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:52.528 [2024-05-15 01:08:27.966421] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:05:52.528 [2024-05-15 01:08:27.966489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3924916 ] 00:05:52.528 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.528 [2024-05-15 01:08:28.062299] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
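What the non_locking_app_on_locked_coremask steps around this point amount to: a second target may share core 0 only because it opts out of the core lock. The sketch below shortens the full build/bin/spdk_tgt path seen in the trace and reuses the illustrative waitforlisten shown earlier:

spdk_tgt -m 0x1 &                                                  # first target claims core 0
pid1=$!; waitforlisten "$pid1"
spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # logs "CPU core locks deactivated."
pid2=$!; waitforlisten "$pid2" /var/tmp/spdk2.sock
lslocks -p "$pid1" | grep -q spdk_cpu_lock                         # only the first instance holds the lock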
00:05:52.528 [2024-05-15 01:08:28.062325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.528 [2024-05-15 01:08:28.205763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.097 01:08:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:53.097 01:08:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:53.097 01:08:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3924754 00:05:53.097 01:08:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3924754 00:05:53.097 01:08:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.477 lslocks: write error 00:05:54.477 01:08:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3924754 00:05:54.477 01:08:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3924754 ']' 00:05:54.477 01:08:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3924754 00:05:54.477 01:08:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:54.477 01:08:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:54.477 01:08:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3924754 00:05:54.477 01:08:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:54.477 01:08:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:54.477 01:08:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3924754' 00:05:54.477 killing process with pid 3924754 00:05:54.477 01:08:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3924754 00:05:54.477 01:08:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3924754 00:05:55.046 01:08:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3924916 00:05:55.046 01:08:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3924916 ']' 00:05:55.046 01:08:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3924916 00:05:55.046 01:08:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:55.046 01:08:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:55.046 01:08:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3924916 00:05:55.046 01:08:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:55.046 01:08:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:55.046 01:08:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3924916' 00:05:55.046 
killing process with pid 3924916 00:05:55.046 01:08:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3924916 00:05:55.046 01:08:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3924916 00:05:55.305 00:05:55.305 real 0m3.755s 00:05:55.305 user 0m3.996s 00:05:55.305 sys 0m1.198s 00:05:55.305 01:08:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.305 01:08:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.305 ************************************ 00:05:55.305 END TEST non_locking_app_on_locked_coremask 00:05:55.305 ************************************ 00:05:55.305 01:08:30 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:55.305 01:08:30 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:55.305 01:08:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.305 01:08:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.305 ************************************ 00:05:55.305 START TEST locking_app_on_unlocked_coremask 00:05:55.305 ************************************ 00:05:55.305 01:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:05:55.305 01:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3925444 00:05:55.305 01:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3925444 /var/tmp/spdk.sock 00:05:55.305 01:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:55.305 01:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3925444 ']' 00:05:55.305 01:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.305 01:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:55.305 01:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.305 01:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:55.305 01:08:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.305 [2024-05-15 01:08:30.983562] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:05:55.305 [2024-05-15 01:08:30.983609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3925444 ] 00:05:55.564 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.564 [2024-05-15 01:08:31.053623] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:55.564 [2024-05-15 01:08:31.053649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.564 [2024-05-15 01:08:31.121710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.185 01:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:56.185 01:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:56.185 01:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3925597 00:05:56.185 01:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3925597 /var/tmp/spdk2.sock 00:05:56.185 01:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:56.185 01:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3925597 ']' 00:05:56.185 01:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:56.185 01:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:56.185 01:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:56.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:56.185 01:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:56.185 01:08:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.185 [2024-05-15 01:08:31.823932] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:05:56.185 [2024-05-15 01:08:31.823986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3925597 ] 00:05:56.185 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.445 [2024-05-15 01:08:31.917432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.445 [2024-05-15 01:08:32.054101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.022 01:08:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:57.022 01:08:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:57.022 01:08:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3925597 00:05:57.022 01:08:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3925597 00:05:57.022 01:08:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.958 lslocks: write error 00:05:57.958 01:08:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3925444 00:05:57.958 01:08:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3925444 ']' 00:05:57.958 01:08:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3925444 00:05:57.958 01:08:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:57.958 01:08:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:57.958 01:08:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3925444 00:05:57.958 01:08:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:57.958 01:08:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:57.958 01:08:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3925444' 00:05:57.958 killing process with pid 3925444 00:05:57.958 01:08:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3925444 00:05:57.958 01:08:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3925444 00:05:58.894 01:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3925597 00:05:58.894 01:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3925597 ']' 00:05:58.894 01:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 3925597 00:05:58.894 01:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:58.894 01:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:58.894 01:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3925597 00:05:58.894 01:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
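The killprocess calls threaded through these traces all follow the same guard-then-kill pattern; a condensed sketch, simplified from the common helper with most error handling trimmed:

killprocess() {
    local pid=$1 process_name
    kill -0 "$pid"                                        # fail fast if it already exited
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        [ "$process_name" != sudo ] || return 1           # refuse to kill a sudo wrapper directly
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                                   # reap it; ignore the signal exit status
}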
00:05:58.894 01:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:58.894 01:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3925597' 00:05:58.894 killing process with pid 3925597 00:05:58.894 01:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 3925597 00:05:58.894 01:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 3925597 00:05:59.153 00:05:59.153 real 0m3.759s 00:05:59.153 user 0m3.999s 00:05:59.153 sys 0m1.230s 00:05:59.153 01:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.153 01:08:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.153 ************************************ 00:05:59.153 END TEST locking_app_on_unlocked_coremask 00:05:59.153 ************************************ 00:05:59.153 01:08:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:59.153 01:08:34 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:59.153 01:08:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:59.153 01:08:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.153 ************************************ 00:05:59.153 START TEST locking_app_on_locked_coremask 00:05:59.153 ************************************ 00:05:59.153 01:08:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:05:59.153 01:08:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3926166 00:05:59.153 01:08:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3926166 /var/tmp/spdk.sock 00:05:59.153 01:08:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.153 01:08:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3926166 ']' 00:05:59.153 01:08:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.153 01:08:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:59.153 01:08:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.153 01:08:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:59.153 01:08:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.153 [2024-05-15 01:08:34.830274] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:05:59.153 [2024-05-15 01:08:34.830318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926166 ] 00:05:59.412 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.412 [2024-05-15 01:08:34.899010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.412 [2024-05-15 01:08:34.972546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.980 01:08:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:59.980 01:08:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:59.980 01:08:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3926320 00:05:59.980 01:08:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3926320 /var/tmp/spdk2.sock 00:05:59.980 01:08:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:59.980 01:08:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:59.980 01:08:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3926320 /var/tmp/spdk2.sock 00:05:59.980 01:08:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:59.980 01:08:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.980 01:08:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:59.980 01:08:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:59.980 01:08:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3926320 /var/tmp/spdk2.sock 00:05:59.980 01:08:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 3926320 ']' 00:05:59.980 01:08:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.980 01:08:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:59.980 01:08:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.980 01:08:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:59.980 01:08:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.238 [2024-05-15 01:08:35.680658] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:06:00.238 [2024-05-15 01:08:35.680711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926320 ] 00:06:00.238 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.238 [2024-05-15 01:08:35.781782] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3926166 has claimed it. 00:06:00.238 [2024-05-15 01:08:35.781824] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:00.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3926320) - No such process 00:06:00.805 ERROR: process (pid: 3926320) is no longer running 00:06:00.805 01:08:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:00.805 01:08:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:00.806 01:08:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:00.806 01:08:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:00.806 01:08:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:00.806 01:08:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:00.806 01:08:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3926166 00:06:00.806 01:08:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3926166 00:06:00.806 01:08:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.371 lslocks: write error 00:06:01.371 01:08:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3926166 00:06:01.371 01:08:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 3926166 ']' 00:06:01.371 01:08:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 3926166 00:06:01.371 01:08:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:06:01.371 01:08:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:01.372 01:08:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3926166 00:06:01.372 01:08:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:01.372 01:08:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:01.372 01:08:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3926166' 00:06:01.372 killing process with pid 3926166 00:06:01.372 01:08:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 3926166 00:06:01.372 01:08:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 3926166 00:06:01.939 00:06:01.939 real 0m2.577s 00:06:01.939 user 0m2.799s 00:06:01.939 sys 0m0.851s 00:06:01.939 01:08:37 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:06:01.939 01:08:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.939 ************************************ 00:06:01.939 END TEST locking_app_on_locked_coremask 00:06:01.939 ************************************ 00:06:01.939 01:08:37 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:01.939 01:08:37 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:01.939 01:08:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:01.939 01:08:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.939 ************************************ 00:06:01.939 START TEST locking_overlapped_coremask 00:06:01.939 ************************************ 00:06:01.939 01:08:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:06:01.939 01:08:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3926726 00:06:01.939 01:08:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3926726 /var/tmp/spdk.sock 00:06:01.939 01:08:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3926726 ']' 00:06:01.939 01:08:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.939 01:08:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:01.939 01:08:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.940 01:08:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:01.940 01:08:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.940 01:08:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:01.940 [2024-05-15 01:08:37.483763] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:06:01.940 [2024-05-15 01:08:37.483809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926726 ] 00:06:01.940 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.940 [2024-05-15 01:08:37.551301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:01.940 [2024-05-15 01:08:37.626391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.940 [2024-05-15 01:08:37.626504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.940 [2024-05-15 01:08:37.626507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.877 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:02.877 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:06:02.877 01:08:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3926746 00:06:02.877 01:08:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3926746 /var/tmp/spdk2.sock 00:06:02.877 01:08:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:02.877 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:02.877 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3926746 /var/tmp/spdk2.sock 00:06:02.877 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:02.877 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.877 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:02.877 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:02.877 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3926746 /var/tmp/spdk2.sock 00:06:02.877 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 3926746 ']' 00:06:02.877 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.877 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:02.877 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.877 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:02.877 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.877 [2024-05-15 01:08:38.324266] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:06:02.877 [2024-05-15 01:08:38.324319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926746 ] 00:06:02.877 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.877 [2024-05-15 01:08:38.425248] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3926726 has claimed it. 00:06:02.877 [2024-05-15 01:08:38.425287] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:03.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (3926746) - No such process 00:06:03.444 ERROR: process (pid: 3926746) is no longer running 00:06:03.444 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:03.444 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:06:03.444 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:03.444 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:03.444 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:03.444 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:03.444 01:08:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:03.444 01:08:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:03.444 01:08:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:03.444 01:08:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:03.444 01:08:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3926726 00:06:03.444 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 3926726 ']' 00:06:03.444 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 3926726 00:06:03.444 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:06:03.444 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:03.444 01:08:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3926726 00:06:03.444 01:08:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:03.444 01:08:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:03.444 01:08:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3926726' 00:06:03.444 killing process with pid 3926726 00:06:03.444 01:08:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
3926726 00:06:03.444 01:08:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 3926726 00:06:03.703 00:06:03.703 real 0m1.911s 00:06:03.703 user 0m5.302s 00:06:03.703 sys 0m0.454s 00:06:03.703 01:08:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:03.703 01:08:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.703 ************************************ 00:06:03.703 END TEST locking_overlapped_coremask 00:06:03.703 ************************************ 00:06:03.703 01:08:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:03.703 01:08:39 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:03.703 01:08:39 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.703 01:08:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.962 ************************************ 00:06:03.962 START TEST locking_overlapped_coremask_via_rpc 00:06:03.962 ************************************ 00:06:03.963 01:08:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:06:03.963 01:08:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3927041 00:06:03.963 01:08:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3927041 /var/tmp/spdk.sock 00:06:03.963 01:08:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:03.963 01:08:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3927041 ']' 00:06:03.963 01:08:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.963 01:08:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:03.963 01:08:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.963 01:08:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:03.963 01:08:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.963 [2024-05-15 01:08:39.483123] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:06:03.963 [2024-05-15 01:08:39.483166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927041 ] 00:06:03.963 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.963 [2024-05-15 01:08:39.551668] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:03.963 [2024-05-15 01:08:39.551692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.963 [2024-05-15 01:08:39.627144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.963 [2024-05-15 01:08:39.627229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.963 [2024-05-15 01:08:39.627230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.897 01:08:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:04.897 01:08:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:04.897 01:08:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3927196 00:06:04.897 01:08:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3927196 /var/tmp/spdk2.sock 00:06:04.897 01:08:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:04.897 01:08:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3927196 ']' 00:06:04.897 01:08:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.897 01:08:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:04.897 01:08:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.897 01:08:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:04.897 01:08:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.897 [2024-05-15 01:08:40.340134] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:06:04.897 [2024-05-15 01:08:40.340188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927196 ] 00:06:04.897 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.897 [2024-05-15 01:08:40.443337] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:04.897 [2024-05-15 01:08:40.443369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.156 [2024-05-15 01:08:40.594396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.156 [2024-05-15 01:08:40.594516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.156 [2024-05-15 01:08:40.594517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:05.722 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.722 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:05.722 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:05.722 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.723 [2024-05-15 01:08:41.169269] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3927041 has claimed it. 
00:06:05.723 request: 00:06:05.723 { 00:06:05.723 "method": "framework_enable_cpumask_locks", 00:06:05.723 "req_id": 1 00:06:05.723 } 00:06:05.723 Got JSON-RPC error response 00:06:05.723 response: 00:06:05.723 { 00:06:05.723 "code": -32603, 00:06:05.723 "message": "Failed to claim CPU core: 2" 00:06:05.723 } 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3927041 /var/tmp/spdk.sock 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3927041 ']' 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3927196 /var/tmp/spdk2.sock 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 3927196 ']' 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
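The request/response pair above is the JSON-RPC traffic generated by the rpc_cmd helper. A rough equivalent using SPDK's scripts/rpc.py client (paths assumed from this workspace) would be:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # First target: claims lock files for cores 0-2 and succeeds.
  $RPC -s /var/tmp/spdk.sock framework_enable_cpumask_locks
  # Second target: shares core 2, so this is expected to fail with
  # code -32603 "Failed to claim CPU core: 2", as logged above.
  $RPC -s /var/tmp/spdk2.sock framework_enable_cpumask_locks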
00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:05.723 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.983 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:05.983 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:05.983 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:05.983 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:05.983 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:05.983 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:05.983 00:06:05.983 real 0m2.113s 00:06:05.983 user 0m0.835s 00:06:05.983 sys 0m0.207s 00:06:05.983 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.983 01:08:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.983 ************************************ 00:06:05.983 END TEST locking_overlapped_coremask_via_rpc 00:06:05.983 ************************************ 00:06:05.983 01:08:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:05.983 01:08:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3927041 ]] 00:06:05.983 01:08:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3927041 00:06:05.983 01:08:41 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3927041 ']' 00:06:05.983 01:08:41 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3927041 00:06:05.983 01:08:41 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:05.983 01:08:41 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:05.983 01:08:41 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3927041 00:06:05.983 01:08:41 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:05.983 01:08:41 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:05.983 01:08:41 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3927041' 00:06:05.983 killing process with pid 3927041 00:06:05.983 01:08:41 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3927041 00:06:05.983 01:08:41 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3927041 00:06:06.551 01:08:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3927196 ]] 00:06:06.551 01:08:41 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3927196 00:06:06.551 01:08:41 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3927196 ']' 00:06:06.551 01:08:41 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3927196 00:06:06.551 01:08:41 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:06:06.551 01:08:41 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:06:06.551 01:08:41 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3927196 00:06:06.551 01:08:42 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:06.551 01:08:42 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:06.551 01:08:42 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3927196' 00:06:06.551 killing process with pid 3927196 00:06:06.551 01:08:42 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 3927196 00:06:06.551 01:08:42 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 3927196 00:06:06.810 01:08:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:06.810 01:08:42 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:06.810 01:08:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3927041 ]] 00:06:06.810 01:08:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3927041 00:06:06.810 01:08:42 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3927041 ']' 00:06:06.810 01:08:42 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3927041 00:06:06.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3927041) - No such process 00:06:06.810 01:08:42 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3927041 is not found' 00:06:06.810 Process with pid 3927041 is not found 00:06:06.810 01:08:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3927196 ]] 00:06:06.810 01:08:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3927196 00:06:06.810 01:08:42 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 3927196 ']' 00:06:06.810 01:08:42 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 3927196 00:06:06.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (3927196) - No such process 00:06:06.810 01:08:42 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 3927196 is not found' 00:06:06.810 Process with pid 3927196 is not found 00:06:06.810 01:08:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:06.810 00:06:06.810 real 0m18.705s 00:06:06.810 user 0m30.925s 00:06:06.810 sys 0m5.990s 00:06:06.810 01:08:42 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.810 01:08:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.810 ************************************ 00:06:06.810 END TEST cpu_locks 00:06:06.810 ************************************ 00:06:06.810 00:06:06.810 real 0m44.394s 00:06:06.810 user 1m22.483s 00:06:06.810 sys 0m10.055s 00:06:06.810 01:08:42 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:06.810 01:08:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:06.810 ************************************ 00:06:06.810 END TEST event 00:06:06.810 ************************************ 00:06:06.810 01:08:42 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:06.810 01:08:42 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.810 01:08:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.810 01:08:42 -- common/autotest_common.sh@10 -- # set +x 00:06:07.069 ************************************ 00:06:07.069 START TEST thread 00:06:07.069 ************************************ 00:06:07.069 01:08:42 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:07.069 * Looking for test storage... 00:06:07.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:07.069 01:08:42 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:07.069 01:08:42 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:07.069 01:08:42 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.069 01:08:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.069 ************************************ 00:06:07.069 START TEST thread_poller_perf 00:06:07.069 ************************************ 00:06:07.069 01:08:42 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:07.069 [2024-05-15 01:08:42.700570] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:06:07.069 [2024-05-15 01:08:42.700647] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927679 ] 00:06:07.069 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.328 [2024-05-15 01:08:42.773230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.328 [2024-05-15 01:08:42.841980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.328 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:08.264 ====================================== 00:06:08.264 busy:2509303696 (cyc) 00:06:08.264 total_run_count: 429000 00:06:08.264 tsc_hz: 2500000000 (cyc) 00:06:08.264 ====================================== 00:06:08.264 poller_cost: 5849 (cyc), 2339 (nsec) 00:06:08.264 00:06:08.264 real 0m1.256s 00:06:08.264 user 0m1.164s 00:06:08.264 sys 0m0.087s 00:06:08.264 01:08:43 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.264 01:08:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.264 ************************************ 00:06:08.264 END TEST thread_poller_perf 00:06:08.264 ************************************ 00:06:08.523 01:08:43 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:08.523 01:08:43 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:08.523 01:08:43 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.523 01:08:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.523 ************************************ 00:06:08.523 START TEST thread_poller_perf 00:06:08.523 ************************************ 00:06:08.523 01:08:44 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:08.523 [2024-05-15 01:08:44.049777] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
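The poller_cost figures reported above follow directly from the counters: total busy cycles divided by the number of poller runs, converted to nanoseconds with the reported TSC frequency:

  poller_cost (cyc)  = busy / total_run_count = 2509303696 / 429000 ≈ 5849 cyc
  poller_cost (nsec) = 5849 cyc / 2.5 cyc-per-nsec (tsc_hz 2500000000) ≈ 2339 nsec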
00:06:08.523 [2024-05-15 01:08:44.049864] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3927961 ] 00:06:08.523 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.523 [2024-05-15 01:08:44.124217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.523 [2024-05-15 01:08:44.193247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.523 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:09.902 ====================================== 00:06:09.902 busy:2501628054 (cyc) 00:06:09.902 total_run_count: 5693000 00:06:09.902 tsc_hz: 2500000000 (cyc) 00:06:09.902 ====================================== 00:06:09.902 poller_cost: 439 (cyc), 175 (nsec) 00:06:09.902 00:06:09.902 real 0m1.248s 00:06:09.902 user 0m1.150s 00:06:09.902 sys 0m0.094s 00:06:09.902 01:08:45 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.902 01:08:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:09.902 ************************************ 00:06:09.902 END TEST thread_poller_perf 00:06:09.902 ************************************ 00:06:09.902 01:08:45 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:09.902 00:06:09.902 real 0m2.797s 00:06:09.902 user 0m2.424s 00:06:09.902 sys 0m0.378s 00:06:09.902 01:08:45 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.902 01:08:45 thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.902 ************************************ 00:06:09.902 END TEST thread 00:06:09.902 ************************************ 00:06:09.902 01:08:45 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:09.902 01:08:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:09.902 01:08:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.902 01:08:45 -- common/autotest_common.sh@10 -- # set +x 00:06:09.902 ************************************ 00:06:09.902 START TEST accel 00:06:09.902 ************************************ 00:06:09.902 01:08:45 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:09.902 * Looking for test storage... 00:06:09.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:09.902 01:08:45 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:09.902 01:08:45 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:09.902 01:08:45 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:09.902 01:08:45 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3928286 00:06:09.902 01:08:45 accel -- accel/accel.sh@63 -- # waitforlisten 3928286 00:06:09.902 01:08:45 accel -- common/autotest_common.sh@827 -- # '[' -z 3928286 ']' 00:06:09.902 01:08:45 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.902 01:08:45 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:09.902 01:08:45 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:09.902 01:08:45 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:09.902 01:08:45 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:09.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.902 01:08:45 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:09.902 01:08:45 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.902 01:08:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:09.902 01:08:45 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.902 01:08:45 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.902 01:08:45 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.902 01:08:45 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.902 01:08:45 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:09.902 01:08:45 accel -- accel/accel.sh@41 -- # jq -r . 00:06:09.902 [2024-05-15 01:08:45.557651] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:06:09.902 [2024-05-15 01:08:45.557707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928286 ] 00:06:09.902 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.161 [2024-05-15 01:08:45.627756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.161 [2024-05-15 01:08:45.696954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.729 01:08:46 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:10.729 01:08:46 accel -- common/autotest_common.sh@860 -- # return 0 00:06:10.729 01:08:46 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:10.729 01:08:46 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:10.729 01:08:46 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:10.729 01:08:46 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:10.729 01:08:46 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:10.729 01:08:46 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:10.729 01:08:46 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:10.729 01:08:46 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.729 01:08:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.729 01:08:46 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.729 01:08:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.729 01:08:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.729 01:08:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.729 01:08:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.729 01:08:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.729 01:08:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.729 01:08:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.729 01:08:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.729 01:08:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.729 01:08:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.729 01:08:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.729 01:08:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.729 01:08:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.729 01:08:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.729 01:08:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.729 01:08:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.729 01:08:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.729 01:08:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.729 01:08:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.729 01:08:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.729 01:08:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.729 01:08:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.729 01:08:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.729 01:08:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.729 01:08:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.729 01:08:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.729 01:08:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.729 01:08:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.729 01:08:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.729 01:08:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.729 01:08:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.729 01:08:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.729 01:08:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.729 01:08:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.729 01:08:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.729 01:08:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.729 01:08:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.730 01:08:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.730 01:08:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.730 01:08:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.730 01:08:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.730 01:08:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.730 01:08:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.730 
01:08:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.730 01:08:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.730 01:08:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.730 01:08:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.730 01:08:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.730 01:08:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.730 01:08:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.730 01:08:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.730 01:08:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.730 01:08:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.730 01:08:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.730 01:08:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.730 01:08:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.730 01:08:46 accel -- accel/accel.sh@75 -- # killprocess 3928286 00:06:10.730 01:08:46 accel -- common/autotest_common.sh@946 -- # '[' -z 3928286 ']' 00:06:10.730 01:08:46 accel -- common/autotest_common.sh@950 -- # kill -0 3928286 00:06:10.730 01:08:46 accel -- common/autotest_common.sh@951 -- # uname 00:06:10.730 01:08:46 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:10.730 01:08:46 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3928286 00:06:10.989 01:08:46 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:10.989 01:08:46 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:10.989 01:08:46 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3928286' 00:06:10.989 killing process with pid 3928286 00:06:10.989 01:08:46 accel -- common/autotest_common.sh@965 -- # kill 3928286 00:06:10.989 01:08:46 accel -- common/autotest_common.sh@970 -- # wait 3928286 00:06:11.264 01:08:46 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:11.264 01:08:46 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:11.264 01:08:46 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:11.264 01:08:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.264 01:08:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.264 01:08:46 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:06:11.264 01:08:46 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:11.264 01:08:46 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:11.265 01:08:46 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.265 01:08:46 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.265 01:08:46 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.265 01:08:46 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.265 01:08:46 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.265 01:08:46 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:11.265 01:08:46 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
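The expected_opcs loop above walks the output of the accel_get_opc_assignments RPC and records every opcode as handled by the software module, since no hardware accel modules are configured for this run. The underlying query is roughly the following, with the exact opcode names depending on the SPDK build:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # prints one "opcode=module" line per operation, e.g. copy=software, crc32c=software, ...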
00:06:11.265 01:08:46 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.265 01:08:46 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:11.265 01:08:46 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:11.265 01:08:46 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:11.265 01:08:46 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.265 01:08:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.584 ************************************ 00:06:11.584 START TEST accel_missing_filename 00:06:11.584 ************************************ 00:06:11.584 01:08:46 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:06:11.584 01:08:46 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:11.584 01:08:46 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:11.584 01:08:46 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:11.584 01:08:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.584 01:08:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:11.584 01:08:46 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.584 01:08:46 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:11.584 01:08:46 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:11.584 01:08:46 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:11.584 01:08:46 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.584 01:08:46 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.584 01:08:46 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.584 01:08:46 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.584 01:08:46 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.584 01:08:46 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:11.584 01:08:46 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:11.584 [2024-05-15 01:08:46.989535] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:06:11.584 [2024-05-15 01:08:46.989601] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928586 ] 00:06:11.584 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.584 [2024-05-15 01:08:47.059882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.584 [2024-05-15 01:08:47.129771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.584 [2024-05-15 01:08:47.169901] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.584 [2024-05-15 01:08:47.228659] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:11.843 A filename is required. 
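That failure is the expected one: per the accel_perf option summary printed further down, compress/decompress workloads need an input file passed with -l. The next test (accel_compress_verify) supplies one, and only then is the -y verify combination rejected on its own:

  # fails: no input file for the compress workload ("A filename is required.")
  accel_perf -t 1 -w compress
  # reaches the compress path; -y is then rejected because compress does not support verify
  accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y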
00:06:11.843 01:08:47 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:11.843 01:08:47 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:11.843 01:08:47 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:11.843 01:08:47 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:11.843 01:08:47 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:11.843 01:08:47 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:11.843 00:06:11.843 real 0m0.360s 00:06:11.843 user 0m0.262s 00:06:11.843 sys 0m0.135s 00:06:11.843 01:08:47 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.843 01:08:47 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:11.843 ************************************ 00:06:11.843 END TEST accel_missing_filename 00:06:11.843 ************************************ 00:06:11.843 01:08:47 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:11.843 01:08:47 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:11.843 01:08:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.843 01:08:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.843 ************************************ 00:06:11.843 START TEST accel_compress_verify 00:06:11.843 ************************************ 00:06:11.843 01:08:47 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:11.843 01:08:47 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:11.843 01:08:47 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:11.843 01:08:47 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:11.843 01:08:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.843 01:08:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:11.843 01:08:47 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.843 01:08:47 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:11.843 01:08:47 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:11.843 01:08:47 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:11.843 01:08:47 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.843 01:08:47 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.843 01:08:47 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.843 01:08:47 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.843 01:08:47 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.843 
01:08:47 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:11.843 01:08:47 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:11.843 [2024-05-15 01:08:47.437164] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:06:11.843 [2024-05-15 01:08:47.437235] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928619 ] 00:06:11.843 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.843 [2024-05-15 01:08:47.506772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.103 [2024-05-15 01:08:47.578015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.103 [2024-05-15 01:08:47.618783] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:12.103 [2024-05-15 01:08:47.678489] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:06:12.103 00:06:12.103 Compression does not support the verify option, aborting. 00:06:12.103 01:08:47 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:12.103 01:08:47 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.103 01:08:47 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:12.103 01:08:47 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:12.103 01:08:47 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:12.103 01:08:47 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.103 00:06:12.103 real 0m0.362s 00:06:12.103 user 0m0.262s 00:06:12.103 sys 0m0.136s 00:06:12.103 01:08:47 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.103 01:08:47 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:12.103 ************************************ 00:06:12.103 END TEST accel_compress_verify 00:06:12.103 ************************************ 00:06:12.363 01:08:47 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:12.363 01:08:47 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:12.363 01:08:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.363 01:08:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.363 ************************************ 00:06:12.363 START TEST accel_wrong_workload 00:06:12.363 ************************************ 00:06:12.363 01:08:47 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:06:12.363 01:08:47 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:12.363 01:08:47 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:12.363 01:08:47 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:12.363 01:08:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.363 01:08:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:12.363 01:08:47 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.363 01:08:47 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:06:12.363 01:08:47 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:12.363 01:08:47 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:12.363 01:08:47 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.363 01:08:47 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.363 01:08:47 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.363 01:08:47 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.363 01:08:47 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.363 01:08:47 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:12.363 01:08:47 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:12.363 Unsupported workload type: foobar 00:06:12.363 [2024-05-15 01:08:47.887628] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:12.363 accel_perf options: 00:06:12.363 [-h help message] 00:06:12.363 [-q queue depth per core] 00:06:12.363 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:12.363 [-T number of threads per core 00:06:12.363 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:12.363 [-t time in seconds] 00:06:12.363 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:12.363 [ dif_verify, , dif_generate, dif_generate_copy 00:06:12.363 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:12.363 [-l for compress/decompress workloads, name of uncompressed input file 00:06:12.363 [-S for crc32c workload, use this seed value (default 0) 00:06:12.364 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:12.364 [-f for fill workload, use this BYTE value (default 255) 00:06:12.364 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:12.364 [-y verify result if this switch is on] 00:06:12.364 [-a tasks to allocate per core (default: same value as -q)] 00:06:12.364 Can be used to spread operations across a wider range of memory. 
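That option summary is the reference for the remaining negative tests: -w must name one of the listed workloads, xor needs at least two source buffers via -x, and -l only applies to compress/decompress. A valid positive-path invocation, matching the accel_crc32c test that follows, would be along the lines of:

  accel_perf -t 1 -w crc32c -S 32 -y   # run crc32c for 1 second with seed 32 and verify the results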
00:06:12.364 01:08:47 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:12.364 01:08:47 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.364 01:08:47 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:12.364 01:08:47 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.364 00:06:12.364 real 0m0.038s 00:06:12.364 user 0m0.017s 00:06:12.364 sys 0m0.021s 00:06:12.364 01:08:47 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.364 01:08:47 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:12.364 ************************************ 00:06:12.364 END TEST accel_wrong_workload 00:06:12.364 ************************************ 00:06:12.364 Error: writing output failed: Broken pipe 00:06:12.364 01:08:47 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:12.364 01:08:47 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:06:12.364 01:08:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.364 01:08:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.364 ************************************ 00:06:12.364 START TEST accel_negative_buffers 00:06:12.364 ************************************ 00:06:12.364 01:08:47 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:12.364 01:08:47 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:12.364 01:08:47 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:12.364 01:08:47 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:12.364 01:08:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.364 01:08:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:12.364 01:08:47 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.364 01:08:47 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:12.364 01:08:47 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:12.364 01:08:47 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:12.364 01:08:47 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.364 01:08:47 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.364 01:08:47 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.364 01:08:47 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.364 01:08:47 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.364 01:08:47 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:12.364 01:08:47 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:12.364 -x option must be non-negative. 
00:06:12.364 [2024-05-15 01:08:48.009395] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:12.364 accel_perf options: 00:06:12.364 [-h help message] 00:06:12.364 [-q queue depth per core] 00:06:12.364 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:12.364 [-T number of threads per core 00:06:12.364 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:12.364 [-t time in seconds] 00:06:12.364 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:12.364 [ dif_verify, , dif_generate, dif_generate_copy 00:06:12.364 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:12.364 [-l for compress/decompress workloads, name of uncompressed input file 00:06:12.364 [-S for crc32c workload, use this seed value (default 0) 00:06:12.364 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:12.364 [-f for fill workload, use this BYTE value (default 255) 00:06:12.364 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:12.364 [-y verify result if this switch is on] 00:06:12.364 [-a tasks to allocate per core (default: same value as -q)] 00:06:12.364 Can be used to spread operations across a wider range of memory. 00:06:12.364 01:08:48 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:12.364 01:08:48 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.364 01:08:48 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:12.364 01:08:48 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.364 00:06:12.364 real 0m0.037s 00:06:12.364 user 0m0.018s 00:06:12.364 sys 0m0.019s 00:06:12.364 01:08:48 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.364 01:08:48 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:12.364 ************************************ 00:06:12.364 END TEST accel_negative_buffers 00:06:12.364 ************************************ 00:06:12.364 Error: writing output failed: Broken pipe 00:06:12.364 01:08:48 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:12.624 01:08:48 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:12.624 01:08:48 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.624 01:08:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.624 ************************************ 00:06:12.624 START TEST accel_crc32c 00:06:12.624 ************************************ 00:06:12.624 01:08:48 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:12.624 [2024-05-15 01:08:48.125920] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:06:12.624 [2024-05-15 01:08:48.125981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3928881 ] 00:06:12.624 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.624 [2024-05-15 01:08:48.196263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.624 [2024-05-15 01:08:48.268257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.624 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.883 01:08:48 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:12.883 01:08:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.883 01:08:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.883 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.883 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.883 01:08:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.883 01:08:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.883 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.883 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.883 01:08:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:12.883 01:08:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.883 01:08:48 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:12.883 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.883 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.883 01:08:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:12.883 01:08:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.883 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.883 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.883 01:08:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:12.883 01:08:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.883 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.884 01:08:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.821 01:08:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.821 01:08:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.821 01:08:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.821 01:08:49 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:13.821 01:08:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.821 01:08:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.821 01:08:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.821 01:08:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.821 01:08:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.821 01:08:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.821 01:08:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.821 01:08:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.821 01:08:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.822 01:08:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.822 01:08:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.822 01:08:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.822 01:08:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.822 01:08:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.822 01:08:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.822 01:08:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.822 01:08:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.822 01:08:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.822 01:08:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.822 01:08:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.822 01:08:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:13.822 01:08:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:13.822 01:08:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.822 00:06:13.822 real 0m1.365s 00:06:13.822 user 0m1.245s 00:06:13.822 sys 0m0.134s 00:06:13.822 01:08:49 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:13.822 01:08:49 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:13.822 ************************************ 00:06:13.822 END TEST accel_crc32c 00:06:13.822 ************************************ 00:06:13.822 01:08:49 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:13.822 01:08:49 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:13.822 01:08:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.822 01:08:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.081 ************************************ 00:06:14.081 START TEST accel_crc32c_C2 00:06:14.081 ************************************ 00:06:14.081 01:08:49 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:14.081 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:14.081 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:14.081 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.081 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.081 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:14.081 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:14.081 01:08:49 accel.accel_crc32c_C2 -- 
accel/accel.sh@12 -- # build_accel_config 00:06:14.081 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.081 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.081 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:14.082 [2024-05-15 01:08:49.583799] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:06:14.082 [2024-05-15 01:08:49.583870] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929117 ] 00:06:14.082 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.082 [2024-05-15 01:08:49.655243] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.082 [2024-05-15 01:08:49.724564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.082 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.341 01:08:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.281 01:08:50 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.281 00:06:15.281 real 0m1.369s 00:06:15.281 user 0m1.254s 00:06:15.281 sys 0m0.129s 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:15.281 01:08:50 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:15.281 ************************************ 00:06:15.281 END TEST accel_crc32c_C2 00:06:15.281 ************************************ 00:06:15.281 01:08:50 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:15.281 01:08:50 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:15.281 01:08:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.281 01:08:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.541 ************************************ 00:06:15.541 START TEST accel_copy 00:06:15.541 ************************************ 00:06:15.541 01:08:51 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:15.541 01:08:51 
accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:15.541 [2024-05-15 01:08:51.031012] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:06:15.541 [2024-05-15 01:08:51.031074] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929374 ] 00:06:15.541 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.541 [2024-05-15 01:08:51.101319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.541 [2024-05-15 01:08:51.169527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:15.541 01:08:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.542 01:08:51 accel.accel_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:15.542 01:08:51 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
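The copy case traced above is driven by the same accel_perf binary every one of these tests uses; the command line logged at accel.sh@12 shows the workload (-w copy), a 1-second runtime (-t 1) and verification (-y), with an optional JSON config fed over /dev/fd/62. A minimal sketch of reproducing that single run by hand, assuming the SPDK tree is built at the workspace path printed in the trace and that dropping the (empty in this run) -c config is acceptable:

    # Re-run only the software copy case from this log. SPDK_DIR is copied from
    # the path in the trace and may differ on another machine.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR"/build/examples/accel_perf -t 1 -w copy -y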
00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:16.922 01:08:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.922 00:06:16.922 real 0m1.362s 00:06:16.922 user 0m1.246s 00:06:16.922 sys 0m0.131s 00:06:16.922 01:08:52 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.922 01:08:52 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:16.922 ************************************ 00:06:16.922 END TEST accel_copy 00:06:16.922 ************************************ 00:06:16.922 01:08:52 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:16.922 01:08:52 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:16.922 01:08:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.922 01:08:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.922 ************************************ 00:06:16.922 START TEST accel_fill 00:06:16.922 ************************************ 00:06:16.922 01:08:52 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:16.922 01:08:52 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:16.923 01:08:52 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:16.923 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:16.923 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:16.923 01:08:52 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:16.923 01:08:52 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:16.923 01:08:52 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:16.923 01:08:52 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.923 01:08:52 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.923 01:08:52 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.923 01:08:52 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.923 01:08:52 accel.accel_fill -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.923 01:08:52 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:16.923 01:08:52 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:16.923 [2024-05-15 01:08:52.482766] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:06:16.923 [2024-05-15 01:08:52.482822] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929607 ] 00:06:16.923 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.923 [2024-05-15 01:08:52.552028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.183 [2024-05-15 01:08:52.621658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.183 01:08:52 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:17.183 01:08:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.121 01:08:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:18.121 01:08:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.121 01:08:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.121 01:08:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.121 01:08:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:18.379 01:08:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.379 01:08:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.379 01:08:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.379 01:08:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:18.379 01:08:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.379 01:08:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.379 01:08:53 accel.accel_fill -- accel/accel.sh@19 -- # read 
-r var val 00:06:18.379 01:08:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:18.379 01:08:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.379 01:08:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.379 01:08:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.379 01:08:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:18.379 01:08:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.379 01:08:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.379 01:08:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.379 01:08:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:18.379 01:08:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:18.380 01:08:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:18.380 01:08:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:18.380 01:08:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.380 01:08:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:18.380 01:08:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.380 00:06:18.380 real 0m1.365s 00:06:18.380 user 0m1.242s 00:06:18.380 sys 0m0.136s 00:06:18.380 01:08:53 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.380 01:08:53 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:18.380 ************************************ 00:06:18.380 END TEST accel_fill 00:06:18.380 ************************************ 00:06:18.380 01:08:53 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:18.380 01:08:53 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:18.380 01:08:53 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.380 01:08:53 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.380 ************************************ 00:06:18.380 START TEST accel_copy_crc32c 00:06:18.380 ************************************ 00:06:18.380 01:08:53 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:06:18.380 01:08:53 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:18.380 01:08:53 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:18.380 01:08:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.380 01:08:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.380 01:08:53 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:18.380 01:08:53 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:18.380 01:08:53 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:18.380 01:08:53 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.380 01:08:53 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.380 01:08:53 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.380 01:08:53 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.380 01:08:53 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.380 01:08:53 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:18.380 01:08:53 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 
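Each test above runs the same build_accel_config step before launching accel_perf: the trace shows an accel_json_cfg array being initialized, a series of [[ 0 -gt 0 ]] / [[ -n '' ]] guards all evaluating false, a local IFS=, and the result piped through jq -r . on /dev/fd/62. The following is only a reconstruction from that xtrace output, not the actual accel.sh source, and the commented flag name in it is an assumption used for illustration:

    # Hypothetical sketch of the config step seen in the trace: gather optional
    # per-module JSON fragments, join them with commas and pretty-print via jq.
    # In these software-only runs no fragment is added, so accel_perf receives
    # no hardware module and falls back to the software implementation.
    build_accel_config_sketch() {
        local accel_json_cfg=()
        # e.g. [[ ${SPDK_TEST_ACCEL_DSA:-0} -gt 0 ]] && accel_json_cfg+=('{...module fragment...}')
        local IFS=,
        [[ -n "${accel_json_cfg[*]}" ]] && printf '%s' "${accel_json_cfg[*]}" | jq -r .
    }

In these runs the function produces no output, which is consistent with accel_module=software appearing in every test's summary.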
00:06:18.380 [2024-05-15 01:08:53.926122] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:06:18.380 [2024-05-15 01:08:53.926179] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3929856 ] 00:06:18.380 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.380 [2024-05-15 01:08:53.996027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.380 [2024-05-15 01:08:54.068262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.639 01:08:54 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.639 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:18.640 01:08:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.576 00:06:19.576 real 0m1.359s 00:06:19.576 user 0m1.249s 00:06:19.576 sys 0m0.124s 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:19.576 01:08:55 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:19.576 ************************************ 00:06:19.576 END TEST accel_copy_crc32c 00:06:19.576 ************************************ 00:06:19.835 01:08:55 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:19.835 01:08:55 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:19.835 01:08:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.835 01:08:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.835 ************************************ 00:06:19.835 START TEST accel_copy_crc32c_C2 00:06:19.835 ************************************ 00:06:19.835 01:08:55 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:19.835 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.835 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:19.835 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:19.835 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:19.835 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
copy_crc32c -y -C 2 00:06:19.835 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:19.835 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.835 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.835 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.835 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.836 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.836 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.836 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:19.836 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:19.836 [2024-05-15 01:08:55.385353] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:06:19.836 [2024-05-15 01:08:55.385411] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930117 ] 00:06:19.836 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.836 [2024-05-15 01:08:55.454964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.836 [2024-05-15 01:08:55.522961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:20.095 01:08:55 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:20.095 01:08:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.033 00:06:21.033 real 0m1.364s 00:06:21.033 user 0m1.244s 00:06:21.033 sys 0m0.133s 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.033 01:08:56 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- 
# set +x 00:06:21.033 ************************************ 00:06:21.033 END TEST accel_copy_crc32c_C2 00:06:21.033 ************************************ 00:06:21.292 01:08:56 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:21.292 01:08:56 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:21.292 01:08:56 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.292 01:08:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.292 ************************************ 00:06:21.292 START TEST accel_dualcast 00:06:21.292 ************************************ 00:06:21.292 01:08:56 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:06:21.292 01:08:56 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:21.292 01:08:56 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:21.292 01:08:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.292 01:08:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.292 01:08:56 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:21.292 01:08:56 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:21.292 01:08:56 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:21.292 01:08:56 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.292 01:08:56 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.292 01:08:56 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.292 01:08:56 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.292 01:08:56 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.293 01:08:56 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:21.293 01:08:56 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:21.293 [2024-05-15 01:08:56.839115] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
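Apart from the fill case (which adds -f 128 -q 64 -a 64) and the chained -C 2 variants, the tests in this stretch differ only in the -w workload handed to accel_perf: crc32c, copy, copy_crc32c and dualcast all use -t 1 with verification (-y). A hedged sketch of exercising just those software workloads in one pass, with SPDK_DIR assumed as in the earlier example:

    # Loop over the simple 1-second, verify-enabled workloads seen in this log.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for w in crc32c copy copy_crc32c dualcast; do
        "$SPDK_DIR"/build/examples/accel_perf -t 1 -w "$w" -y
    done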
00:06:21.293 [2024-05-15 01:08:56.839174] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930400 ] 00:06:21.293 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.293 [2024-05-15 01:08:56.909962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.293 [2024-05-15 01:08:56.980885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.552 
01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.552 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.553 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:21.553 01:08:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:21.553 01:08:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:21.553 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:21.553 01:08:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.490 01:08:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:22.490 01:08:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.490 01:08:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.490 01:08:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.490 01:08:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:22.490 01:08:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.491 01:08:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.491 01:08:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.491 01:08:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:22.491 01:08:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.491 01:08:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.491 01:08:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.491 01:08:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:22.491 01:08:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.491 01:08:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.491 01:08:58 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:06:22.491 01:08:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:22.491 01:08:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.491 01:08:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.491 01:08:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.491 01:08:58 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:22.491 01:08:58 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:22.491 01:08:58 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:22.491 01:08:58 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:22.491 01:08:58 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.491 01:08:58 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:22.491 01:08:58 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.491 00:06:22.491 real 0m1.370s 00:06:22.491 user 0m1.261s 00:06:22.491 sys 0m0.123s 00:06:22.491 01:08:58 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:22.491 01:08:58 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:22.491 ************************************ 00:06:22.491 END TEST accel_dualcast 00:06:22.491 ************************************ 00:06:22.751 01:08:58 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:22.751 01:08:58 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:22.751 01:08:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:22.751 01:08:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.751 ************************************ 00:06:22.751 START TEST accel_compare 00:06:22.751 ************************************ 00:06:22.751 01:08:58 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:06:22.751 01:08:58 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:22.751 01:08:58 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:22.751 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:22.751 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:22.751 01:08:58 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:22.751 01:08:58 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:22.751 01:08:58 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:22.751 01:08:58 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.751 01:08:58 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.751 01:08:58 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.751 01:08:58 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.751 01:08:58 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.751 01:08:58 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:22.751 01:08:58 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:22.751 [2024-05-15 01:08:58.286274] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
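The dualcast block above shows the pattern every accel sub-test in this excerpt follows: run_test wraps accel_test, which launches the accel_perf example binary and then parses its output back through the long val=... read loop. A minimal sketch of reproducing that run by hand is below; the binary path and flags are copied from the trace, while running it standalone and dropping the -c /dev/fd/62 config (the harness-generated accel_json_cfg is empty in this run, which appears to leave the default software module in use) are assumptions.

#!/usr/bin/env bash
# Hedged sketch, not the harness itself: re-run the dualcast case from a built SPDK tree.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
"$PERF" -t 1 -w dualcast -y   # -w: workload, -t 1: 1 second run time, -y: verify (parsed back as Yes above)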
00:06:22.751 [2024-05-15 01:08:58.286323] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930679 ] 00:06:22.751 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.751 [2024-05-15 01:08:58.348662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.751 [2024-05-15 01:08:58.418441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.011 01:08:58 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.011 01:08:58 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.950 01:08:59 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:23.950 01:08:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:23.950 00:06:23.950 real 0m1.348s 00:06:23.950 user 0m1.239s 00:06:23.950 sys 0m0.124s 00:06:23.950 01:08:59 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.950 01:08:59 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:23.950 ************************************ 00:06:23.950 END TEST accel_compare 00:06:23.950 ************************************ 00:06:24.210 01:08:59 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:24.210 01:08:59 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:06:24.210 01:08:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.210 01:08:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.210 ************************************ 00:06:24.210 START TEST accel_xor 00:06:24.210 ************************************ 00:06:24.210 01:08:59 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:06:24.210 01:08:59 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:24.210 01:08:59 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:24.210 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.210 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.210 01:08:59 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:24.210 01:08:59 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:24.210 01:08:59 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:24.210 01:08:59 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.210 01:08:59 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.210 01:08:59 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.210 01:08:59 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.210 01:08:59 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.210 01:08:59 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:24.210 01:08:59 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:24.210 [2024-05-15 01:08:59.733531] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
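Each sub-test above closes with a real/user/sys triple (dualcast took about 1.37 s wall time, compare about 1.35 s), which looks like the shell's time output for the wrapped accel_test call. If those numbers need to be pulled out of a saved copy of this console output, a hypothetical one-liner is sketched below; the console.log filename is an assumption, not something produced by this job.

# Hypothetical post-processing of a saved copy of this console output (filename assumed).
grep -Eo 'real [0-9]+m[0-9.]+s' console.log   # wall-clock time of each accel sub-test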
00:06:24.210 [2024-05-15 01:08:59.733586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930964 ] 00:06:24.210 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.210 [2024-05-15 01:08:59.802209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.210 [2024-05-15 01:08:59.870275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.469 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.470 01:08:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.408 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.408 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.408 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.409 
01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:25.409 01:09:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.409 00:06:25.409 real 0m1.366s 00:06:25.409 user 0m1.245s 00:06:25.409 sys 0m0.132s 00:06:25.409 01:09:01 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.409 01:09:01 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:25.409 ************************************ 00:06:25.409 END TEST accel_xor 00:06:25.409 ************************************ 00:06:25.669 01:09:01 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:25.669 01:09:01 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:25.669 01:09:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.669 01:09:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.669 ************************************ 00:06:25.669 START TEST accel_xor 00:06:25.669 ************************************ 00:06:25.669 01:09:01 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:06:25.669 01:09:01 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:25.669 01:09:01 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:25.669 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.669 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.669 01:09:01 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:25.669 01:09:01 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:25.669 01:09:01 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:25.669 01:09:01 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.669 01:09:01 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.669 01:09:01 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.669 01:09:01 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.669 01:09:01 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.669 01:09:01 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:25.669 01:09:01 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:25.669 [2024-05-15 01:09:01.185591] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
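The xor run that finished above and the one starting here differ only in the -x 3 flag on the second invocation; in the parsed output this shows up as val=2 for the first run and val=3 for the second, which appears to be the number of xor source buffers. A hedged sketch of the two invocations, using only the flags visible in the trace (standalone use is assumed):

# Hedged sketch: the two xor variants exercised in this log.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
"$PERF" -t 1 -w xor -y          # default source count, parsed back as val=2 above
"$PERF" -t 1 -w xor -y -x 3     # three sources, parsed back as val=3 above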
00:06:25.669 [2024-05-15 01:09:01.185670] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3931337 ] 00:06:25.669 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.669 [2024-05-15 01:09:01.256534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.669 [2024-05-15 01:09:01.325039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.928 01:09:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.896 
01:09:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:26.896 01:09:02 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.896 00:06:26.896 real 0m1.367s 00:06:26.896 user 0m1.252s 00:06:26.896 sys 0m0.127s 00:06:26.896 01:09:02 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.896 01:09:02 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:26.896 ************************************ 00:06:26.896 END TEST accel_xor 00:06:26.896 ************************************ 00:06:26.896 01:09:02 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:26.896 01:09:02 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:26.896 01:09:02 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:26.896 01:09:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.156 ************************************ 00:06:27.156 START TEST accel_dif_verify 00:06:27.156 ************************************ 00:06:27.156 01:09:02 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:27.156 [2024-05-15 01:09:02.639091] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
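The DIF tests that follow drop the -y switch (the harness parses val=No instead of val=Yes) and add block-geometry values to the configuration: '4096 bytes' twice, '512 bytes' and '8 bytes'. Their exact meaning (buffer, block and metadata sizes) is not spelled out in this log, so the sketch below only reuses the command line that is visible in the trace; standalone use is an assumption.

# Hedged sketch: dif_verify as launched above, with only the flags shown in the trace.
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
"$PERF" -t 1 -w dif_verify      # no -y here, matching the val=No parsed by the harness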
00:06:27.156 [2024-05-15 01:09:02.639151] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3931663 ] 00:06:27.156 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.156 [2024-05-15 01:09:02.706629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.156 [2024-05-15 01:09:02.774360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.156 
01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.156 01:09:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:27.157 01:09:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.157 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.157 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.157 01:09:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:27.157 01:09:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.157 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.157 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.157 01:09:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:27.157 01:09:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.157 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.157 01:09:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.536 01:09:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:28.536 
01:09:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.536 01:09:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.536 01:09:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.536 01:09:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:28.536 01:09:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.536 01:09:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.536 01:09:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.537 01:09:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:28.537 01:09:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.537 01:09:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.537 01:09:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.537 01:09:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:28.537 01:09:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.537 01:09:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.537 01:09:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.537 01:09:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:28.537 01:09:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.537 01:09:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.537 01:09:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.537 01:09:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:28.537 01:09:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.537 01:09:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.537 01:09:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.537 01:09:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.537 01:09:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:28.537 01:09:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.537 00:06:28.537 real 0m1.359s 00:06:28.537 user 0m1.249s 00:06:28.537 sys 0m0.124s 00:06:28.537 01:09:03 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.537 01:09:03 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:28.537 ************************************ 00:06:28.537 END TEST accel_dif_verify 00:06:28.537 ************************************ 00:06:28.537 01:09:04 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:28.537 01:09:04 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:28.537 01:09:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.537 01:09:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.537 ************************************ 00:06:28.537 START TEST accel_dif_generate 00:06:28.537 ************************************ 00:06:28.537 01:09:04 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:06:28.537 01:09:04 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:28.537 01:09:04 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:28.537 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.537 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.537 
01:09:04 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:28.537 01:09:04 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:28.537 01:09:04 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:28.537 01:09:04 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.537 01:09:04 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.537 01:09:04 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.537 01:09:04 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.537 01:09:04 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.537 01:09:04 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:28.537 01:09:04 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:28.537 [2024-05-15 01:09:04.087665] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:06:28.537 [2024-05-15 01:09:04.087725] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3932007 ] 00:06:28.537 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.537 [2024-05-15 01:09:04.157268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.537 [2024-05-15 01:09:04.225542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.797 01:09:04 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.797 01:09:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:29.735 01:09:05 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.735 00:06:29.735 real 0m1.362s 00:06:29.735 user 0m1.254s 00:06:29.735 sys 
0m0.123s 00:06:29.735 01:09:05 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:29.735 01:09:05 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:29.735 ************************************ 00:06:29.735 END TEST accel_dif_generate 00:06:29.735 ************************************ 00:06:29.995 01:09:05 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:29.995 01:09:05 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:29.995 01:09:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:29.995 01:09:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.995 ************************************ 00:06:29.995 START TEST accel_dif_generate_copy 00:06:29.995 ************************************ 00:06:29.995 01:09:05 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:06:29.995 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:29.995 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:29.995 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:29.995 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:29.995 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:29.995 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:29.995 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:29.995 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.995 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.995 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.995 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.995 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.995 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:29.995 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:29.995 [2024-05-15 01:09:05.540768] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:06:29.995 [2024-05-15 01:09:05.540830] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3932614 ] 00:06:29.995 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.995 [2024-05-15 01:09:05.611751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.995 [2024-05-15 01:09:05.682148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.255 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:30.255 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.255 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.255 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.255 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:30.255 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.255 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.255 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.255 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:30.255 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.255 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.255 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.255 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:30.255 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.255 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.255 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.255 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:30.255 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.256 01:09:05 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.256 01:09:05 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.194 00:06:31.194 real 0m1.370s 00:06:31.194 user 0m1.248s 00:06:31.194 sys 0m0.135s 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.194 01:09:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:31.194 ************************************ 00:06:31.194 END TEST accel_dif_generate_copy 00:06:31.194 ************************************ 00:06:31.453 01:09:06 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:31.453 01:09:06 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.453 01:09:06 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:31.453 01:09:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.453 01:09:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.453 ************************************ 00:06:31.453 START TEST accel_comp 00:06:31.453 ************************************ 00:06:31.453 01:09:06 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.453 01:09:06 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:31.453 01:09:06 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:06:31.453 01:09:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.453 01:09:06 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.453 01:09:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.453 01:09:06 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.453 01:09:06 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:31.453 01:09:06 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.453 01:09:06 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.453 01:09:06 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.453 01:09:06 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.453 01:09:06 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.453 01:09:06 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:31.453 01:09:06 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:31.453 [2024-05-15 01:09:06.998916] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:06:31.453 [2024-05-15 01:09:06.998975] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3932951 ] 00:06:31.453 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.453 [2024-05-15 01:09:07.068286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.453 [2024-05-15 01:09:07.138458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.712 01:09:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.712 01:09:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.712 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.712 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.712 01:09:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.712 01:09:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.712 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.712 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.712 01:09:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.713 
01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.713 01:09:07 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.713 01:09:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:32.650 01:09:08 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.650 00:06:32.650 real 0m1.368s 00:06:32.650 user 0m1.250s 00:06:32.650 sys 0m0.134s 00:06:32.650 01:09:08 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.650 01:09:08 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:32.650 ************************************ 00:06:32.650 END TEST accel_comp 00:06:32.650 ************************************ 00:06:32.910 01:09:08 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:32.910 01:09:08 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:32.910 01:09:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.910 01:09:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.910 ************************************ 00:06:32.910 START TEST accel_decomp 00:06:32.910 ************************************ 00:06:32.910 01:09:08 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:32.910 01:09:08 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:32.910 01:09:08 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:32.910 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:32.910 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:32.910 01:09:08 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:32.910 01:09:08 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:32.910 01:09:08 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:32.910 01:09:08 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.910 01:09:08 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.910 01:09:08 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.910 01:09:08 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.910 01:09:08 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.910 01:09:08 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:32.910 01:09:08 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:32.910 [2024-05-15 01:09:08.455922] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:06:32.910 [2024-05-15 01:09:08.455984] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933232 ] 00:06:32.910 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.910 [2024-05-15 01:09:08.525368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.910 [2024-05-15 01:09:08.596293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.170 01:09:08 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.170 01:09:08 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.170 01:09:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.106 01:09:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.106 01:09:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.106 01:09:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.106 01:09:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.106 01:09:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.106 01:09:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.106 01:09:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.106 01:09:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.106 01:09:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.106 01:09:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.107 01:09:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.107 01:09:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.107 01:09:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.107 01:09:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.107 01:09:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.107 01:09:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.107 01:09:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.107 01:09:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.107 01:09:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.107 01:09:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.107 01:09:09 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.107 01:09:09 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.107 01:09:09 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.107 01:09:09 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.107 01:09:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.107 01:09:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:34.107 01:09:09 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.107 00:06:34.107 real 0m1.370s 00:06:34.107 user 0m1.249s 00:06:34.107 sys 0m0.136s 00:06:34.107 01:09:09 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.107 01:09:09 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:34.107 ************************************ 00:06:34.107 END TEST accel_decomp 00:06:34.107 ************************************ 00:06:34.366 
01:09:09 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:34.366 01:09:09 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:34.366 01:09:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.366 01:09:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.366 ************************************ 00:06:34.366 START TEST accel_decmop_full 00:06:34.366 ************************************ 00:06:34.366 01:09:09 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:34.366 01:09:09 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:06:34.366 01:09:09 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:06:34.366 01:09:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.366 01:09:09 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.366 01:09:09 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:34.366 01:09:09 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:34.366 01:09:09 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:06:34.366 01:09:09 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.366 01:09:09 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.366 01:09:09 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.366 01:09:09 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.366 01:09:09 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.366 01:09:09 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:06:34.366 01:09:09 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:06:34.366 [2024-05-15 01:09:09.913257] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:06:34.366 [2024-05-15 01:09:09.913319] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933511 ] 00:06:34.366 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.366 [2024-05-15 01:09:09.983147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.626 [2024-05-15 01:09:10.070615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.626 01:09:10 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:36.006 01:09:11 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.006 00:06:36.006 real 0m1.395s 00:06:36.006 user 0m1.276s 00:06:36.006 sys 0m0.134s 00:06:36.006 01:09:11 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:36.006 01:09:11 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:06:36.006 ************************************ 00:06:36.006 END TEST accel_decmop_full 00:06:36.006 ************************************ 00:06:36.006 01:09:11 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:36.006 01:09:11 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:36.006 01:09:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:36.006 01:09:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.006 ************************************ 00:06:36.006 START TEST accel_decomp_mcore 00:06:36.006 ************************************ 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- 
accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:36.006 [2024-05-15 01:09:11.400912] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:06:36.006 [2024-05-15 01:09:11.400981] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3933803 ] 00:06:36.006 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.006 [2024-05-15 01:09:11.472444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:36.006 [2024-05-15 01:09:11.545078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.006 [2024-05-15 01:09:11.545175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.006 [2024-05-15 01:09:11.545257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:36.006 [2024-05-15 01:09:11.545260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.006 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.007 01:09:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.386 00:06:37.386 real 0m1.384s 00:06:37.386 user 0m4.592s 00:06:37.386 sys 0m0.140s 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.386 01:09:12 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:37.386 ************************************ 00:06:37.386 END TEST accel_decomp_mcore 00:06:37.386 ************************************ 00:06:37.386 01:09:12 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:37.386 01:09:12 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:37.386 01:09:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.386 01:09:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.386 ************************************ 00:06:37.386 START TEST accel_decomp_full_mcore 00:06:37.386 ************************************ 00:06:37.386 01:09:12 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:37.386 01:09:12 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:37.386 01:09:12 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:37.386 01:09:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 01:09:12 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 01:09:12 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:37.386 01:09:12 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:37.386 01:09:12 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:37.386 01:09:12 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.386 01:09:12 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.386 01:09:12 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.386 01:09:12 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 
0 -gt 0 ]] 00:06:37.386 01:09:12 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.386 01:09:12 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:37.386 01:09:12 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:37.386 [2024-05-15 01:09:12.873884] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:06:37.386 [2024-05-15 01:09:12.873944] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3934085 ] 00:06:37.386 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.386 [2024-05-15 01:09:12.944503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:37.386 [2024-05-15 01:09:13.018044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.386 [2024-05-15 01:09:13.018141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.386 [2024-05-15 01:09:13.018238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.386 [2024-05-15 01:09:13.018244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.386 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.386 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.386 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.386 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.386 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.387 01:09:13 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.387 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.645 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.645 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.645 01:09:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.582 01:09:14 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.582 00:06:38.582 real 0m1.391s 00:06:38.582 user 0m4.615s 00:06:38.582 sys 0m0.137s 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:38.582 01:09:14 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:38.582 ************************************ 00:06:38.582 END TEST accel_decomp_full_mcore 00:06:38.582 ************************************ 00:06:38.841 01:09:14 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:38.841 01:09:14 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:38.841 01:09:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:38.841 01:09:14 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.841 ************************************ 00:06:38.841 START TEST accel_decomp_mthread 00:06:38.841 ************************************ 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
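The decompress tests traced above and below all drive the same accel_perf example binary; the only visible difference in how parallelism is requested is the flag set — the mcore variants pass a four-core mask (-m 0xf, matching the four reactors started), while the mthread variants stay on one core and pass -T 2. A minimal by-hand reproduction, assuming the workspace path from this log and inferring flag meanings from the traced values rather than from accel_perf documentation (the harness additionally feeds a generated accel JSON config through -c /dev/fd/62, omitted here):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # multi-core run (accel_decomp_full_mcore): 1 second of decompress across core mask 0xf;
  # -l points at the pre-compressed input, -y (traced as val=Yes) presumably enables verification
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -o 0 -m 0xf
  # multi-thread run (accel_decomp_mthread): single core, two worker threads requested via -T 2
  $SPDK/build/examples/accel_perf -t 1 -w decompress -l $SPDK/test/accel/bib -y -T 2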
00:06:38.841 [2024-05-15 01:09:14.351661] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:06:38.841 [2024-05-15 01:09:14.351715] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3934379 ] 00:06:38.841 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.841 [2024-05-15 01:09:14.419515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.841 [2024-05-15 01:09:14.487330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.841 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:38.842 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:38.842 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:38.842 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:38.842 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.101 01:09:14 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.101 01:09:14 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:06:40.039 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.039 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.039 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.040 00:06:40.040 real 0m1.367s 00:06:40.040 user 0m1.253s 00:06:40.040 sys 0m0.128s 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:40.040 01:09:15 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:40.040 ************************************ 00:06:40.040 END TEST accel_decomp_mthread 00:06:40.040 ************************************ 00:06:40.299 01:09:15 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:40.299 01:09:15 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:40.299 01:09:15 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:40.299 01:09:15 
accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.299 ************************************ 00:06:40.299 START TEST accel_decomp_full_mthread 00:06:40.299 ************************************ 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:40.299 [2024-05-15 01:09:15.809307] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:06:40.299 [2024-05-15 01:09:15.809366] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3934639 ] 00:06:40.299 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.299 [2024-05-15 01:09:15.877323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.299 [2024-05-15 01:09:15.946374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.299 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 
-- # read -r var val 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.559 01:09:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val= 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.559 01:09:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.497 00:06:41.497 real 0m1.392s 00:06:41.497 user 0m1.280s 00:06:41.497 sys 0m0.127s 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.497 01:09:17 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:41.497 ************************************ 00:06:41.497 END TEST accel_decomp_full_mthread 00:06:41.497 
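Two details of the runs above are worth keeping in mind when reproducing them: the accel configuration reaches accel_perf as JSON on -c /dev/fd/62 (assembled by build_accel_config; with every [[ 0 -gt 0 ]] guard false it is effectively empty), and the "full" variants add -o 0, which is what switches the traced data size from '4096 bytes' to '111250 bytes' — the whole decompressed payload instead of 4 KiB chunks, as inferred from the val= trace rather than from accel_perf documentation. A sketch of the same plumbing using process substitution; the JSON shape below is a placeholder, not the exact config accel.sh builds:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # stand-in for build_accel_config: an empty SPDK JSON config handed over on a /dev/fd/NN path
  build_cfg() { printf '{"subsystems": []}\n'; }
  $SPDK/build/examples/accel_perf -c <(build_cfg) -t 1 -w decompress \
      -l $SPDK/test/accel/bib -y -o 0 -T 2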
************************************ 00:06:41.756 01:09:17 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:41.756 01:09:17 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:41.756 01:09:17 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:41.756 01:09:17 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:41.756 01:09:17 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.756 01:09:17 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.756 01:09:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.756 01:09:17 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.756 01:09:17 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.756 01:09:17 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.756 01:09:17 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.756 01:09:17 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:41.756 01:09:17 accel -- accel/accel.sh@41 -- # jq -r . 00:06:41.756 ************************************ 00:06:41.756 START TEST accel_dif_functional_tests 00:06:41.757 ************************************ 00:06:41.757 01:09:17 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:41.757 [2024-05-15 01:09:17.304230] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:06:41.757 [2024-05-15 01:09:17.304272] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3934932 ] 00:06:41.757 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.757 [2024-05-15 01:09:17.371309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.757 [2024-05-15 01:09:17.442458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.757 [2024-05-15 01:09:17.442556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.757 [2024-05-15 01:09:17.442556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.050 00:06:42.050 00:06:42.050 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.050 http://cunit.sourceforge.net/ 00:06:42.050 00:06:42.050 00:06:42.050 Suite: accel_dif 00:06:42.050 Test: verify: DIF generated, GUARD check ...passed 00:06:42.050 Test: verify: DIF generated, APPTAG check ...passed 00:06:42.050 Test: verify: DIF generated, REFTAG check ...passed 00:06:42.050 Test: verify: DIF not generated, GUARD check ...[2024-05-15 01:09:17.511041] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:42.050 [2024-05-15 01:09:17.511086] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:42.050 passed 00:06:42.050 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 01:09:17.511118] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:42.050 [2024-05-15 01:09:17.511138] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:42.050 passed 00:06:42.050 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 01:09:17.511158] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:42.050 [2024-05-15 
01:09:17.511175] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:42.050 passed 00:06:42.050 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:42.050 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 01:09:17.511239] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:42.050 passed 00:06:42.050 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:42.050 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:42.050 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:42.050 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 01:09:17.511343] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:42.050 passed 00:06:42.050 Test: generate copy: DIF generated, GUARD check ...passed 00:06:42.050 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:42.050 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:42.050 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:42.050 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:42.050 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:42.050 Test: generate copy: iovecs-len validate ...[2024-05-15 01:09:17.511518] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:42.050 passed 00:06:42.050 Test: generate copy: buffer alignment validate ...passed 00:06:42.050 00:06:42.050 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.050 suites 1 1 n/a 0 0 00:06:42.050 tests 20 20 20 0 0 00:06:42.050 asserts 204 204 204 0 n/a 00:06:42.050 00:06:42.050 Elapsed time = 0.002 seconds 00:06:42.050 00:06:42.050 real 0m0.439s 00:06:42.050 user 0m0.604s 00:06:42.050 sys 0m0.153s 00:06:42.050 01:09:17 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.050 01:09:17 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:42.050 ************************************ 00:06:42.050 END TEST accel_dif_functional_tests 00:06:42.050 ************************************ 00:06:42.336 00:06:42.336 real 0m32.344s 00:06:42.336 user 0m35.274s 00:06:42.336 sys 0m5.072s 00:06:42.336 01:09:17 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.336 01:09:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.336 ************************************ 00:06:42.336 END TEST accel 00:06:42.336 ************************************ 00:06:42.337 01:09:17 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:42.337 01:09:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:42.337 01:09:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.337 01:09:17 -- common/autotest_common.sh@10 -- # set +x 00:06:42.337 ************************************ 00:06:42.337 START TEST accel_rpc 00:06:42.337 ************************************ 00:06:42.337 01:09:17 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:42.337 * Looking for test storage... 
00:06:42.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:42.337 01:09:17 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:42.337 01:09:17 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3935014 00:06:42.337 01:09:17 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3935014 00:06:42.337 01:09:17 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 3935014 ']' 00:06:42.337 01:09:17 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:42.337 01:09:17 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.337 01:09:17 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:42.337 01:09:17 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.337 01:09:17 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:42.337 01:09:17 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.337 [2024-05-15 01:09:17.977047] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:06:42.337 [2024-05-15 01:09:17.977095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3935014 ] 00:06:42.337 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.597 [2024-05-15 01:09:18.045954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.597 [2024-05-15 01:09:18.121880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.166 01:09:18 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:43.166 01:09:18 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:43.166 01:09:18 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:43.166 01:09:18 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:43.166 01:09:18 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:43.166 01:09:18 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:43.166 01:09:18 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:43.166 01:09:18 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:43.166 01:09:18 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.166 01:09:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.166 ************************************ 00:06:43.166 START TEST accel_assign_opcode 00:06:43.166 ************************************ 00:06:43.166 01:09:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:06:43.166 01:09:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:43.166 01:09:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.166 01:09:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:43.166 [2024-05-15 01:09:18.803903] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:43.166 01:09:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:06:43.166 01:09:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:43.166 01:09:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.166 01:09:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:43.166 [2024-05-15 01:09:18.811919] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:43.166 01:09:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.166 01:09:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:43.166 01:09:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.166 01:09:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:43.426 01:09:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.426 01:09:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:43.426 01:09:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:43.426 01:09:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.426 01:09:18 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:43.426 01:09:18 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:43.426 01:09:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.426 software 00:06:43.426 00:06:43.426 real 0m0.228s 00:06:43.426 user 0m0.043s 00:06:43.426 sys 0m0.012s 00:06:43.426 01:09:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.426 01:09:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:43.426 ************************************ 00:06:43.426 END TEST accel_assign_opcode 00:06:43.426 ************************************ 00:06:43.426 01:09:19 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3935014 00:06:43.426 01:09:19 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 3935014 ']' 00:06:43.426 01:09:19 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 3935014 00:06:43.426 01:09:19 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:06:43.426 01:09:19 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:43.426 01:09:19 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3935014 00:06:43.685 01:09:19 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:43.685 01:09:19 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:43.685 01:09:19 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3935014' 00:06:43.685 killing process with pid 3935014 00:06:43.685 01:09:19 accel_rpc -- common/autotest_common.sh@965 -- # kill 3935014 00:06:43.685 01:09:19 accel_rpc -- common/autotest_common.sh@970 -- # wait 3935014 00:06:43.945 00:06:43.945 real 0m1.621s 00:06:43.945 user 0m1.635s 00:06:43.945 sys 0m0.484s 00:06:43.945 01:09:19 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.945 01:09:19 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.945 ************************************ 00:06:43.945 END TEST accel_rpc 00:06:43.945 ************************************ 00:06:43.945 01:09:19 -- spdk/autotest.sh@181 -- # 
run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:43.945 01:09:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:43.945 01:09:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:43.945 01:09:19 -- common/autotest_common.sh@10 -- # set +x 00:06:43.945 ************************************ 00:06:43.945 START TEST app_cmdline 00:06:43.945 ************************************ 00:06:43.945 01:09:19 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:43.945 * Looking for test storage... 00:06:44.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:44.204 01:09:19 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:44.204 01:09:19 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3935365 00:06:44.204 01:09:19 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3935365 00:06:44.204 01:09:19 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:44.204 01:09:19 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 3935365 ']' 00:06:44.204 01:09:19 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.204 01:09:19 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:44.204 01:09:19 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.204 01:09:19 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:44.204 01:09:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:44.204 [2024-05-15 01:09:19.696559] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:06:44.205 [2024-05-15 01:09:19.696613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3935365 ] 00:06:44.205 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.205 [2024-05-15 01:09:19.764690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.205 [2024-05-15 01:09:19.837577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.143 01:09:20 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:45.143 01:09:20 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:06:45.143 01:09:20 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:45.143 { 00:06:45.143 "version": "SPDK v24.05-pre git sha1 aa13730db", 00:06:45.143 "fields": { 00:06:45.143 "major": 24, 00:06:45.143 "minor": 5, 00:06:45.143 "patch": 0, 00:06:45.143 "suffix": "-pre", 00:06:45.143 "commit": "aa13730db" 00:06:45.143 } 00:06:45.143 } 00:06:45.143 01:09:20 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:45.143 01:09:20 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:45.143 01:09:20 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:45.143 01:09:20 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:45.143 01:09:20 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:45.143 01:09:20 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:45.143 01:09:20 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:45.143 01:09:20 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.143 01:09:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.143 01:09:20 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.143 01:09:20 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:45.143 01:09:20 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:45.143 01:09:20 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.143 01:09:20 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:45.143 01:09:20 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.143 01:09:20 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.143 01:09:20 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.143 01:09:20 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.143 01:09:20 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.143 01:09:20 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.143 01:09:20 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.143 01:09:20 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.143 01:09:20 
app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:45.143 01:09:20 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:45.402 request: 00:06:45.402 { 00:06:45.402 "method": "env_dpdk_get_mem_stats", 00:06:45.402 "req_id": 1 00:06:45.402 } 00:06:45.402 Got JSON-RPC error response 00:06:45.402 response: 00:06:45.402 { 00:06:45.402 "code": -32601, 00:06:45.402 "message": "Method not found" 00:06:45.402 } 00:06:45.402 01:09:20 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:45.402 01:09:20 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:45.402 01:09:20 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:45.402 01:09:20 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:45.402 01:09:20 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3935365 00:06:45.402 01:09:20 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 3935365 ']' 00:06:45.402 01:09:20 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 3935365 00:06:45.402 01:09:20 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:06:45.402 01:09:20 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:45.402 01:09:20 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3935365 00:06:45.402 01:09:20 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:45.402 01:09:20 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:45.402 01:09:20 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3935365' 00:06:45.402 killing process with pid 3935365 00:06:45.402 01:09:20 app_cmdline -- common/autotest_common.sh@965 -- # kill 3935365 00:06:45.402 01:09:20 app_cmdline -- common/autotest_common.sh@970 -- # wait 3935365 00:06:45.662 00:06:45.662 real 0m1.712s 00:06:45.662 user 0m1.992s 00:06:45.662 sys 0m0.480s 00:06:45.662 01:09:21 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.662 01:09:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.662 ************************************ 00:06:45.662 END TEST app_cmdline 00:06:45.662 ************************************ 00:06:45.662 01:09:21 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:45.662 01:09:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:45.662 01:09:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.662 01:09:21 -- common/autotest_common.sh@10 -- # set +x 00:06:45.662 ************************************ 00:06:45.662 START TEST version 00:06:45.662 ************************************ 00:06:45.662 01:09:21 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:45.922 * Looking for test storage... 
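
The failed env_dpdk_get_mem_stats call above is the negative half of the allowlist check: any method outside --rpcs-allowed comes back as JSON-RPC error -32601 ("Method not found") and rpc.py exits non-zero. A small sketch of scripting that check against the same target; the grep pattern is illustrative and not part of the original test:

    # Expect rejection of a method that is not on the allowlist.
    if "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats 2>&1 | grep -q 'Method not found'; then
        echo "allowlist enforced: env_dpdk_get_mem_stats rejected"
    else
        echo "unexpected: env_dpdk_get_mem_stats was accepted" >&2
        exit 1
    fi
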
00:06:45.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:45.922 01:09:21 version -- app/version.sh@17 -- # get_header_version major 00:06:45.922 01:09:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:45.922 01:09:21 version -- app/version.sh@14 -- # cut -f2 00:06:45.922 01:09:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.922 01:09:21 version -- app/version.sh@17 -- # major=24 00:06:45.922 01:09:21 version -- app/version.sh@18 -- # get_header_version minor 00:06:45.922 01:09:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:45.922 01:09:21 version -- app/version.sh@14 -- # cut -f2 00:06:45.922 01:09:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.922 01:09:21 version -- app/version.sh@18 -- # minor=5 00:06:45.922 01:09:21 version -- app/version.sh@19 -- # get_header_version patch 00:06:45.922 01:09:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.922 01:09:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:45.922 01:09:21 version -- app/version.sh@14 -- # cut -f2 00:06:45.922 01:09:21 version -- app/version.sh@19 -- # patch=0 00:06:45.922 01:09:21 version -- app/version.sh@20 -- # get_header_version suffix 00:06:45.922 01:09:21 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:45.922 01:09:21 version -- app/version.sh@14 -- # cut -f2 00:06:45.922 01:09:21 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.922 01:09:21 version -- app/version.sh@20 -- # suffix=-pre 00:06:45.922 01:09:21 version -- app/version.sh@22 -- # version=24.5 00:06:45.922 01:09:21 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:45.922 01:09:21 version -- app/version.sh@28 -- # version=24.5rc0 00:06:45.922 01:09:21 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:45.922 01:09:21 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:45.922 01:09:21 version -- app/version.sh@30 -- # py_version=24.5rc0 00:06:45.922 01:09:21 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:45.922 00:06:45.922 real 0m0.182s 00:06:45.922 user 0m0.084s 00:06:45.922 sys 0m0.139s 00:06:45.922 01:09:21 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.922 01:09:21 version -- common/autotest_common.sh@10 -- # set +x 00:06:45.922 ************************************ 00:06:45.922 END TEST version 00:06:45.922 ************************************ 00:06:45.922 01:09:21 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:45.922 01:09:21 -- spdk/autotest.sh@194 -- # uname -s 00:06:45.922 01:09:21 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:45.922 01:09:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:45.922 01:09:21 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:45.922 01:09:21 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 
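
The version test above recovers the version from include/spdk/version.h with a grep/cut/tr pipeline and checks that the Python package reports the same string. A condensed sketch of that comparison, assuming the same checkout layout; the -pre to rc0 mapping mirrors what the traced run produced (24.5-pre vs. 24.5rc0):

    # Compare the C header version with the Python package version.
    hdr="$SPDK_DIR/include/spdk/version.h"
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)     # 24 in the traced run
    minor=$(get_header_version MINOR)     # 5
    patch=$(get_header_version PATCH)     # 0
    suffix=$(get_header_version SUFFIX)   # -pre

    version="$major.$minor"
    [[ $patch != 0 ]] && version="$version.$patch"
    [[ $suffix == -pre ]] && version="${version}rc0"

    py_version=$(PYTHONPATH="$SPDK_DIR/python" python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py_version == "$version" ]] && echo "versions agree: $version"
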
00:06:45.922 01:09:21 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:45.922 01:09:21 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:45.922 01:09:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:45.922 01:09:21 -- common/autotest_common.sh@10 -- # set +x 00:06:45.922 01:09:21 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:45.922 01:09:21 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:45.922 01:09:21 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:06:45.922 01:09:21 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:06:45.922 01:09:21 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:06:45.922 01:09:21 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:06:45.922 01:09:21 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:45.922 01:09:21 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:45.922 01:09:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:45.922 01:09:21 -- common/autotest_common.sh@10 -- # set +x 00:06:46.183 ************************************ 00:06:46.183 START TEST nvmf_tcp 00:06:46.183 ************************************ 00:06:46.183 01:09:21 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:46.183 * Looking for test storage... 00:06:46.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:46.183 01:09:21 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.183 01:09:21 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.183 01:09:21 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.183 01:09:21 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.183 01:09:21 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.183 01:09:21 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.183 01:09:21 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:46.183 01:09:21 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:46.183 01:09:21 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:46.183 01:09:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:46.183 01:09:21 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:46.183 01:09:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:46.183 01:09:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.183 
01:09:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:46.183 ************************************ 00:06:46.183 START TEST nvmf_example 00:06:46.183 ************************************ 00:06:46.183 01:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:46.183 * Looking for test storage... 00:06:46.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:46.184 01:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:46.184 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:46.184 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.184 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.184 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.184 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.184 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.184 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.184 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.184 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.184 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.444 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.444 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:06:46.444 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:06:46.444 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.444 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.444 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:46.444 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.444 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:46.444 01:09:21 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.444 01:09:21 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.444 01:09:21 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.444 01:09:21 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.444 01:09:21 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:06:46.445 01:09:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:53.020 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:53.020 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:53.020 Found net devices under 
0000:af:00.0: cvl_0_0 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:53.020 Found net devices under 0000:af:00.1: cvl_0_1 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:53.020 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:53.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:53.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:06:53.021 00:06:53.021 --- 10.0.0.2 ping statistics --- 00:06:53.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.021 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:53.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:53.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:06:53.021 00:06:53.021 --- 10.0.0.1 ping statistics --- 00:06:53.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.021 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3939158 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3939158 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 3939158 ']' 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
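
Everything from nvmf_tcp_init onward builds a self-contained TCP topology on one host: one port of the e810 NIC is pushed into a private network namespace and becomes the target side, while the other port stays in the default namespace as the initiator side. A condensed sketch of that setup with the interface names discovered above (cvl_0_0/cvl_0_1 are environment-specific):

    # Split the two NIC ports between initiator (default ns) and target (private ns).
    TGT_IF=cvl_0_0                 # moved into the namespace, gets 10.0.0.2
    INI_IF=cvl_0_1                 # stays in the default namespace, gets 10.0.0.1
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open the NVMe/TCP port and verify reachability in both directions.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

    # The nvmf example target then runs inside the namespace, as traced above.
    ip netns exec "$NS" "$SPDK_DIR/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
    nvmfpid=$!
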
00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:53.021 01:09:28 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:53.281 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.850 01:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:53.850 01:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:06:53.850 01:09:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:53.850 01:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:53.850 01:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:54.110 01:09:29 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:54.110 EAL: No free 2048 kB hugepages reported on node 1 
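
With the target listening for RPCs, the test provisions a malloc-backed subsystem over JSON-RPC and then drives it with the userspace perf tool. A sketch of the same sequence with the parameters from the trace, using rpc.py and its default socket in place of the harness's rpc_cmd wrapper:

    # Provision the nvmf target over JSON-RPC, then benchmark it over TCP.
    rpc="$SPDK_DIR/scripts/rpc.py"

    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512      # 64 MiB / 512 B blocks; prints the bdev name (Malloc0 here)
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Same perf invocation as the traced run: queue depth 64, 4 KiB random
    # mixed I/O for 10 seconds against the TCP listener created above.
    "$SPDK_DIR/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
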
00:07:06.325 Initializing NVMe Controllers 00:07:06.325 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:06.325 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:06.325 Initialization complete. Launching workers. 00:07:06.325 ======================================================== 00:07:06.325 Latency(us) 00:07:06.325 Device Information : IOPS MiB/s Average min max 00:07:06.325 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14074.40 54.98 4549.06 689.81 15735.12 00:07:06.325 ======================================================== 00:07:06.325 Total : 14074.40 54.98 4549.06 689.81 15735.12 00:07:06.325 00:07:06.325 01:09:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:06.325 01:09:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:06.325 01:09:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:06.325 01:09:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:06.325 01:09:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:06.325 01:09:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:06.325 01:09:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:06.325 01:09:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:06.325 rmmod nvme_tcp 00:07:06.325 rmmod nvme_fabrics 00:07:06.325 rmmod nvme_keyring 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3939158 ']' 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3939158 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 3939158 ']' 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 3939158 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3939158 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3939158' 00:07:06.325 killing process with pid 3939158 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 3939158 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 3939158 00:07:06.325 nvmf threads initialize successfully 00:07:06.325 bdev subsystem init successfully 00:07:06.325 created a nvmf target service 00:07:06.325 create targets's poll groups done 00:07:06.325 all subsystems of target started 00:07:06.325 nvmf target is running 00:07:06.325 all subsystems of target stopped 00:07:06.325 destroy targets's poll groups done 00:07:06.325 destroyed the nvmf target service 00:07:06.325 bdev subsystem finish successfully 00:07:06.325 nvmf threads destroy successfully 00:07:06.325 01:09:40 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:06.325 01:09:40 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.892 01:09:42 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:06.892 01:09:42 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:06.892 01:09:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:06.892 01:09:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.892 00:07:06.892 real 0m20.613s 00:07:06.892 user 0m45.841s 00:07:06.892 sys 0m7.274s 00:07:06.892 01:09:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:06.892 01:09:42 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:06.892 ************************************ 00:07:06.892 END TEST nvmf_example 00:07:06.892 ************************************ 00:07:06.892 01:09:42 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:06.892 01:09:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:06.892 01:09:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:06.892 01:09:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:06.892 ************************************ 00:07:06.892 START TEST nvmf_filesystem 00:07:06.892 ************************************ 00:07:06.892 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:06.892 * Looking for test storage... 
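
Teardown in nvmftestfini mirrors the setup: stop the target process, unload the kernel NVMe/TCP modules the test loaded, and undo the namespace plumbing. A sketch of that path; the body of the remove_spdk_ns helper is not shown in this excerpt, so the ip netns del line is an assumption about what it amounts to here:

    # Tear the example environment back down.
    kill "$nvmfpid"; wait "$nvmfpid" 2>/dev/null
    modprobe -v -r nvme-tcp               # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, as above
    modprobe -v -r nvme-fabrics

    # Assumption: remove_spdk_ns deletes the namespace created during setup.
    ip netns del cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1
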
00:07:06.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.892 01:09:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:06.892 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:06.892 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:06.892 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:06.892 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:06.892 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:06.892 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:07.155 01:09:42 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:07.155 01:09:42 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:07.155 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:07.156 
01:09:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:07.156 #define SPDK_CONFIG_H 00:07:07.156 #define SPDK_CONFIG_APPS 1 00:07:07.156 #define SPDK_CONFIG_ARCH native 00:07:07.156 #undef SPDK_CONFIG_ASAN 00:07:07.156 #undef SPDK_CONFIG_AVAHI 00:07:07.156 #undef SPDK_CONFIG_CET 00:07:07.156 #define SPDK_CONFIG_COVERAGE 1 00:07:07.156 #define SPDK_CONFIG_CROSS_PREFIX 00:07:07.156 #undef SPDK_CONFIG_CRYPTO 00:07:07.156 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:07.156 #undef SPDK_CONFIG_CUSTOMOCF 00:07:07.156 #undef SPDK_CONFIG_DAOS 00:07:07.156 #define SPDK_CONFIG_DAOS_DIR 00:07:07.156 #define SPDK_CONFIG_DEBUG 1 00:07:07.156 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:07.156 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:07.156 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:07.156 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:07.156 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:07.156 #undef SPDK_CONFIG_DPDK_UADK 00:07:07.156 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:07.156 #define SPDK_CONFIG_EXAMPLES 1 00:07:07.156 #undef SPDK_CONFIG_FC 00:07:07.156 #define SPDK_CONFIG_FC_PATH 00:07:07.156 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:07.156 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:07.156 #undef SPDK_CONFIG_FUSE 00:07:07.156 #undef SPDK_CONFIG_FUZZER 00:07:07.156 #define SPDK_CONFIG_FUZZER_LIB 00:07:07.156 #undef SPDK_CONFIG_GOLANG 00:07:07.156 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:07.156 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:07.156 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:07.156 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:07.156 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:07.156 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:07.156 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:07.156 #define SPDK_CONFIG_IDXD 1 00:07:07.156 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:07.156 #undef SPDK_CONFIG_IPSEC_MB 00:07:07.156 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:07.156 #define SPDK_CONFIG_ISAL 1 00:07:07.156 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:07.156 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:07.156 #define SPDK_CONFIG_LIBDIR 00:07:07.156 #undef SPDK_CONFIG_LTO 00:07:07.156 #define SPDK_CONFIG_MAX_LCORES 00:07:07.156 #define SPDK_CONFIG_NVME_CUSE 1 00:07:07.156 #undef SPDK_CONFIG_OCF 00:07:07.156 #define SPDK_CONFIG_OCF_PATH 00:07:07.156 #define SPDK_CONFIG_OPENSSL_PATH 00:07:07.156 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:07.156 #define SPDK_CONFIG_PGO_DIR 00:07:07.156 #undef 
SPDK_CONFIG_PGO_USE 00:07:07.156 #define SPDK_CONFIG_PREFIX /usr/local 00:07:07.156 #undef SPDK_CONFIG_RAID5F 00:07:07.156 #undef SPDK_CONFIG_RBD 00:07:07.156 #define SPDK_CONFIG_RDMA 1 00:07:07.156 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:07.156 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:07.156 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:07.156 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:07.156 #define SPDK_CONFIG_SHARED 1 00:07:07.156 #undef SPDK_CONFIG_SMA 00:07:07.156 #define SPDK_CONFIG_TESTS 1 00:07:07.156 #undef SPDK_CONFIG_TSAN 00:07:07.156 #define SPDK_CONFIG_UBLK 1 00:07:07.156 #define SPDK_CONFIG_UBSAN 1 00:07:07.156 #undef SPDK_CONFIG_UNIT_TESTS 00:07:07.156 #undef SPDK_CONFIG_URING 00:07:07.156 #define SPDK_CONFIG_URING_PATH 00:07:07.156 #undef SPDK_CONFIG_URING_ZNS 00:07:07.156 #undef SPDK_CONFIG_USDT 00:07:07.156 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:07.156 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:07.156 #define SPDK_CONFIG_VFIO_USER 1 00:07:07.156 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:07.156 #define SPDK_CONFIG_VHOST 1 00:07:07.156 #define SPDK_CONFIG_VIRTIO 1 00:07:07.156 #undef SPDK_CONFIG_VTUNE 00:07:07.156 #define SPDK_CONFIG_VTUNE_DIR 00:07:07.156 #define SPDK_CONFIG_WERROR 1 00:07:07.156 #define SPDK_CONFIG_WPDK_DIR 00:07:07.156 #undef SPDK_CONFIG_XNVME 00:07:07.156 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:07.156 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:07:07.157 01:09:42 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:07.157 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo 
leak:libfuse3.so 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 
00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j112 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 3941669 ]] 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 3941669 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.1RivXG 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.1RivXG/tests/target /tmp/spdk.1RivXG 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- 
# avails["$mount"]=67108864 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=972992512 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4311437312 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=52255322112 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61742292992 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9486970880 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30867771392 00:07:07.158 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871146496 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12339077120 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12348461056 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=9383936 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30869848064 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30871146496 00:07:07.159 01:09:42 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=1298432 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6174224384 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6174228480 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:07:07.159 * Looking for test storage... 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=52255322112 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=11701563392 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:07:07.159 01:09:42 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.159 
01:09:42 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.159 01:09:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:07.160 01:09:42 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:07.160 01:09:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:15.313 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.313 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:15.313 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:15.313 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:15.313 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:15.313 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:15.313 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:15.313 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:15.313 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:15.313 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:15.313 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:15.313 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:15.313 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:15.313 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:15.313 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:15.313 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.313 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:15.314 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:15.314 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.314 01:09:49 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:15.314 Found net devices under 0000:af:00.0: cvl_0_0 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:15.314 Found net devices under 0000:af:00.1: cvl_0_1 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:15.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:07:15.314 00:07:15.314 --- 10.0.0.2 ping statistics --- 00:07:15.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.314 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:15.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:15.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:07:15.314 00:07:15.314 --- 10.0.0.1 ping statistics --- 00:07:15.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.314 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:15.314 ************************************ 00:07:15.314 START TEST nvmf_filesystem_no_in_capsule 00:07:15.314 ************************************ 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3945055 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3945055 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3945055 ']' 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:15.314 01:09:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:15.314 [2024-05-15 01:09:50.002556] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:07:15.314 [2024-05-15 01:09:50.002608] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:15.314 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.314 [2024-05-15 01:09:50.080387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:15.315 [2024-05-15 01:09:50.159524] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:15.315 [2024-05-15 01:09:50.159567] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:15.315 [2024-05-15 01:09:50.159576] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:15.315 [2024-05-15 01:09:50.159584] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:15.315 [2024-05-15 01:09:50.159591] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
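For readers following the trace, the network plumbing and target launch logged above condense to roughly the commands below. Everything here is taken from this run's own output, so the interface names (cvl_0_0 for the target port, cvl_0_1 for the initiator port on the two E810 ports found earlier), the 10.0.0.x addresses, and the nvmf_tgt path are specific to this runner rather than fixed by the scripts; treat it as a sketch of what nvmf_tcp_init and nvmfappstart did in this job, not as the canonical procedure.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port is isolated in its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                               # root namespace reaches the target address
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # and the namespace reaches the initiator
  modprobe nvme-tcp                                # kernel NVMe/TCP initiator driver
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The single ping in each direction is the only connectivity check performed before the filesystem tests start, and the target application itself runs entirely inside the cvl_0_0_ns_spdk namespace.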
00:07:15.315 [2024-05-15 01:09:50.159645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.315 [2024-05-15 01:09:50.159739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.315 [2024-05-15 01:09:50.159824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:15.315 [2024-05-15 01:09:50.159825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:15.315 [2024-05-15 01:09:50.856146] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:15.315 Malloc1 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.315 01:09:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:15.315 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.315 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.315 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.315 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:15.315 [2024-05-15 01:09:51.004222] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:15.315 [2024-05-15 01:09:51.004499] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.574 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.574 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:15.574 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:15.574 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:15.574 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:15.574 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:15.574 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:15.574 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.574 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:15.574 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.574 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:15.574 { 00:07:15.574 "name": "Malloc1", 00:07:15.574 "aliases": [ 00:07:15.574 "46c82135-82a2-459a-8382-987a95ca1f58" 00:07:15.574 ], 00:07:15.574 "product_name": "Malloc disk", 00:07:15.574 "block_size": 512, 00:07:15.574 "num_blocks": 1048576, 00:07:15.574 "uuid": "46c82135-82a2-459a-8382-987a95ca1f58", 00:07:15.574 "assigned_rate_limits": { 00:07:15.574 "rw_ios_per_sec": 0, 00:07:15.574 "rw_mbytes_per_sec": 0, 00:07:15.574 "r_mbytes_per_sec": 0, 00:07:15.574 "w_mbytes_per_sec": 0 00:07:15.574 }, 00:07:15.574 "claimed": true, 00:07:15.574 "claim_type": "exclusive_write", 00:07:15.574 "zoned": false, 00:07:15.574 "supported_io_types": { 00:07:15.574 "read": true, 00:07:15.574 "write": true, 00:07:15.574 "unmap": true, 00:07:15.574 "write_zeroes": true, 00:07:15.574 "flush": true, 00:07:15.574 "reset": true, 00:07:15.574 "compare": false, 00:07:15.574 "compare_and_write": false, 00:07:15.574 "abort": true, 00:07:15.574 "nvme_admin": false, 00:07:15.574 "nvme_io": false 00:07:15.574 }, 00:07:15.574 "memory_domains": [ 00:07:15.574 { 00:07:15.574 "dma_device_id": "system", 00:07:15.574 "dma_device_type": 1 
00:07:15.574 }, 00:07:15.574 { 00:07:15.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.574 "dma_device_type": 2 00:07:15.574 } 00:07:15.574 ], 00:07:15.574 "driver_specific": {} 00:07:15.574 } 00:07:15.574 ]' 00:07:15.574 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:15.574 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:15.574 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:15.574 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:15.574 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:15.574 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:15.574 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:15.574 01:09:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:16.952 01:09:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:16.952 01:09:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:16.952 01:09:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:16.952 01:09:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:16.952 01:09:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:18.858 01:09:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:18.858 01:09:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:18.858 01:09:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:18.858 01:09:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:18.858 01:09:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:18.858 01:09:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:18.858 01:09:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:18.858 01:09:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:18.858 01:09:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:18.858 01:09:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:18.858 01:09:54 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:18.858 01:09:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:18.858 01:09:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:18.858 01:09:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:18.858 01:09:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:18.858 01:09:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:18.858 01:09:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:19.117 01:09:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:19.376 01:09:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:20.314 01:09:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:20.314 01:09:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:20.314 01:09:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:20.314 01:09:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.314 01:09:55 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:20.573 ************************************ 00:07:20.573 START TEST filesystem_ext4 00:07:20.573 ************************************ 00:07:20.573 01:09:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:20.573 01:09:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:20.573 01:09:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:20.573 01:09:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:20.573 01:09:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:20.573 01:09:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:20.573 01:09:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:20.573 01:09:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:20.573 01:09:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:20.573 01:09:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:20.573 01:09:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:20.573 mke2fs 1.46.5 (30-Dec-2021) 00:07:20.573 Discarding device blocks: 0/522240 done 00:07:20.573 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:20.573 Filesystem UUID: 282c76cf-1c9e-4535-92b4-0b4e061270f5 00:07:20.573 Superblock backups stored on blocks: 00:07:20.573 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:20.573 00:07:20.573 Allocating group tables: 0/64 done 00:07:20.573 Writing inode tables: 0/64 done 00:07:20.832 Creating journal (8192 blocks): done 00:07:21.660 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:07:21.660 00:07:21.660 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:21.660 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:21.919 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:21.919 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:21.919 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:21.919 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:21.919 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:21.919 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:21.919 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3945055 00:07:21.919 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:21.919 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:21.919 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:21.919 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:21.919 00:07:21.919 real 0m1.541s 00:07:21.919 user 0m0.028s 00:07:21.919 sys 0m0.079s 00:07:21.919 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.919 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:21.919 ************************************ 00:07:21.919 END TEST filesystem_ext4 00:07:21.919 ************************************ 00:07:21.919 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:21.919 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:21.919 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.919 
01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.179 ************************************ 00:07:22.179 START TEST filesystem_btrfs 00:07:22.179 ************************************ 00:07:22.179 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:22.179 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:22.179 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:22.179 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:22.179 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:22.179 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:22.179 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:22.179 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:22.179 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:22.179 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:22.179 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:22.179 btrfs-progs v6.6.2 00:07:22.179 See https://btrfs.readthedocs.io for more information. 00:07:22.179 00:07:22.179 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:22.179 NOTE: several default settings have changed in version 5.15, please make sure 00:07:22.179 this does not affect your deployments: 00:07:22.179 - DUP for metadata (-m dup) 00:07:22.179 - enabled no-holes (-O no-holes) 00:07:22.179 - enabled free-space-tree (-R free-space-tree) 00:07:22.179 00:07:22.179 Label: (null) 00:07:22.179 UUID: 6af15395-b899-4337-a99e-240dbd24512d 00:07:22.179 Node size: 16384 00:07:22.179 Sector size: 4096 00:07:22.179 Filesystem size: 510.00MiB 00:07:22.179 Block group profiles: 00:07:22.179 Data: single 8.00MiB 00:07:22.179 Metadata: DUP 32.00MiB 00:07:22.179 System: DUP 8.00MiB 00:07:22.179 SSD detected: yes 00:07:22.179 Zoned device: no 00:07:22.179 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:22.179 Runtime features: free-space-tree 00:07:22.179 Checksum: crc32c 00:07:22.179 Number of devices: 1 00:07:22.179 Devices: 00:07:22.179 ID SIZE PATH 00:07:22.179 1 510.00MiB /dev/nvme0n1p1 00:07:22.179 00:07:22.179 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:22.179 01:09:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:22.439 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:22.439 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3945055 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:22.699 00:07:22.699 real 0m0.532s 00:07:22.699 user 0m0.034s 00:07:22.699 sys 0m0.134s 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:22.699 ************************************ 00:07:22.699 END TEST filesystem_btrfs 00:07:22.699 ************************************ 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:22.699 01:09:58 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:22.699 ************************************ 00:07:22.699 START TEST filesystem_xfs 00:07:22.699 ************************************ 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:22.699 01:09:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:22.699 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:22.699 = sectsz=512 attr=2, projid32bit=1 00:07:22.699 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:22.699 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:22.699 data = bsize=4096 blocks=130560, imaxpct=25 00:07:22.699 = sunit=0 swidth=0 blks 00:07:22.699 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:22.699 log =internal log bsize=4096 blocks=16384, version=2 00:07:22.699 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:22.699 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:24.078 Discarding blocks...Done. 
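For reference, the target in this pass was provisioned with the rpc_cmd calls visible earlier in the trace (nvmf_create_transport -t tcp -o -u 8192 -c 0, bdev_malloc_create 512 512 -b Malloc1, nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 4420), after which the host attached the namespace with nvme connect and the 512 MiB malloc disk appeared as nvme0n1. Each filesystem_* sub-test (ext4 and btrfs above, XFS here) then exercises the same sequence against that disk; a condensed sketch using the device, mountpoint, and pid from this run, with the mkfs line standing in for whichever of mkfs.ext4 -F / mkfs.btrfs -f / mkfs.xfs -f the sub-test selects:

  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%   # done once after connect
  partprobe
  mkfs.xfs -f /dev/nvme0n1p1                # ext4/btrfs variants use mkfs.ext4 -F / mkfs.btrfs -f
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa                     # minimal write through the filesystem
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 3945055                           # the nvmf_tgt pid from this run: target must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1     # device and partition must still be visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1

After all three filesystems pass, the partition is removed, the host runs nvme disconnect, the subsystem is deleted over RPC, and the target process is killed before the whole test is repeated with in-capsule data enabled (-c 4096).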
00:07:24.078 01:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:24.078 01:09:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:26.617 01:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:26.617 01:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:26.617 01:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:26.617 01:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:26.617 01:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:26.617 01:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:26.617 01:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3945055 00:07:26.617 01:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:26.617 01:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:26.617 01:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:26.617 01:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:26.617 00:07:26.617 real 0m3.568s 00:07:26.617 user 0m0.025s 00:07:26.617 sys 0m0.090s 00:07:26.617 01:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.617 01:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:26.617 ************************************ 00:07:26.617 END TEST filesystem_xfs 00:07:26.617 ************************************ 00:07:26.617 01:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:26.617 01:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:26.617 01:10:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:26.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:26.617 
01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3945055 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3945055 ']' 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3945055 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3945055 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3945055' 00:07:26.617 killing process with pid 3945055 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 3945055 00:07:26.617 [2024-05-15 01:10:02.189878] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:26.617 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 3945055 00:07:26.876 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:26.876 00:07:26.876 real 0m12.616s 00:07:26.876 user 0m49.041s 00:07:26.876 sys 0m1.840s 00:07:26.876 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.876 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.876 ************************************ 00:07:26.876 END TEST nvmf_filesystem_no_in_capsule 00:07:26.877 ************************************ 00:07:27.136 01:10:02 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:27.136 01:10:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:07:27.136 01:10:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:27.136 01:10:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:27.136 ************************************ 00:07:27.136 START TEST nvmf_filesystem_in_capsule 00:07:27.136 ************************************ 00:07:27.136 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:07:27.136 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:27.136 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:27.136 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:27.136 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:27.136 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.136 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3947405 00:07:27.136 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:27.136 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3947405 00:07:27.136 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 3947405 ']' 00:07:27.136 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.136 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:27.136 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.136 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:27.136 01:10:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.136 [2024-05-15 01:10:02.711830] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:07:27.136 [2024-05-15 01:10:02.711873] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.136 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.136 [2024-05-15 01:10:02.785687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:27.395 [2024-05-15 01:10:02.861320] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.395 [2024-05-15 01:10:02.861356] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:27.395 [2024-05-15 01:10:02.861365] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.395 [2024-05-15 01:10:02.861374] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.396 [2024-05-15 01:10:02.861382] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:27.396 [2024-05-15 01:10:02.861433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.396 [2024-05-15 01:10:02.861527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.396 [2024-05-15 01:10:02.861614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:27.396 [2024-05-15 01:10:02.861616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.963 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:27.963 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:07:27.963 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:27.963 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:27.963 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.963 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.963 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:27.963 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:27.963 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.963 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:27.963 [2024-05-15 01:10:03.571931] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:27.963 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:27.963 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:27.963 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:27.963 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.223 Malloc1 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.223 01:10:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.223 [2024-05-15 01:10:03.724272] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:28.223 [2024-05-15 01:10:03.724537] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:07:28.223 { 00:07:28.223 "name": "Malloc1", 00:07:28.223 "aliases": [ 00:07:28.223 "416fa8e0-2cc3-4093-bdd3-28823117997a" 00:07:28.223 ], 00:07:28.223 "product_name": "Malloc disk", 00:07:28.223 "block_size": 512, 00:07:28.223 "num_blocks": 1048576, 00:07:28.223 "uuid": "416fa8e0-2cc3-4093-bdd3-28823117997a", 00:07:28.223 "assigned_rate_limits": { 00:07:28.223 "rw_ios_per_sec": 0, 00:07:28.223 "rw_mbytes_per_sec": 0, 00:07:28.223 "r_mbytes_per_sec": 0, 00:07:28.223 "w_mbytes_per_sec": 0 00:07:28.223 }, 00:07:28.223 "claimed": true, 00:07:28.223 "claim_type": "exclusive_write", 00:07:28.223 "zoned": false, 00:07:28.223 "supported_io_types": { 00:07:28.223 "read": true, 00:07:28.223 "write": true, 00:07:28.223 "unmap": true, 00:07:28.223 "write_zeroes": true, 00:07:28.223 "flush": true, 00:07:28.223 "reset": true, 
00:07:28.223 "compare": false, 00:07:28.223 "compare_and_write": false, 00:07:28.223 "abort": true, 00:07:28.223 "nvme_admin": false, 00:07:28.223 "nvme_io": false 00:07:28.223 }, 00:07:28.223 "memory_domains": [ 00:07:28.223 { 00:07:28.223 "dma_device_id": "system", 00:07:28.223 "dma_device_type": 1 00:07:28.223 }, 00:07:28.223 { 00:07:28.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.223 "dma_device_type": 2 00:07:28.223 } 00:07:28.223 ], 00:07:28.223 "driver_specific": {} 00:07:28.223 } 00:07:28.223 ]' 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:28.223 01:10:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:29.599 01:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:29.599 01:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:07:29.599 01:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:29.599 01:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:29.599 01:10:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:07:32.199 01:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:32.199 01:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:32.199 01:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:32.199 01:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:32.199 01:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:32.199 01:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:07:32.199 01:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:32.199 01:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:32.199 01:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:32.199 01:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:32.199 01:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:32.199 01:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:32.199 01:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:32.199 01:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:32.199 01:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:32.199 01:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:32.199 01:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:32.199 01:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:32.199 01:10:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:33.137 01:10:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:33.137 01:10:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:33.137 01:10:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:33.137 01:10:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.137 01:10:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:33.137 ************************************ 00:07:33.137 START TEST filesystem_in_capsule_ext4 00:07:33.137 ************************************ 00:07:33.137 01:10:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:33.137 01:10:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:33.137 01:10:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:33.137 01:10:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:33.137 01:10:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:07:33.137 01:10:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:33.137 01:10:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:07:33.137 01:10:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:07:33.137 01:10:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:07:33.137 01:10:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:07:33.137 01:10:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:33.137 mke2fs 1.46.5 (30-Dec-2021) 00:07:33.396 Discarding device blocks: 0/522240 done 00:07:33.396 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:33.396 Filesystem UUID: 82e9f311-0add-4f82-9b94-02c9a63c41e1 00:07:33.396 Superblock backups stored on blocks: 00:07:33.396 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:33.396 00:07:33.396 Allocating group tables: 0/64 done 00:07:33.396 Writing inode tables: 0/64 done 00:07:36.686 Creating journal (8192 blocks): done 00:07:37.254 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:07:37.254 00:07:37.254 01:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:07:37.254 01:10:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:38.191 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3947405 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:38.192 00:07:38.192 real 0m4.927s 00:07:38.192 user 0m0.030s 00:07:38.192 sys 0m0.082s 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:38.192 ************************************ 00:07:38.192 END TEST filesystem_in_capsule_ext4 00:07:38.192 ************************************ 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:38.192 ************************************ 00:07:38.192 START TEST filesystem_in_capsule_btrfs 00:07:38.192 ************************************ 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:07:38.192 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:38.452 btrfs-progs v6.6.2 00:07:38.452 See https://btrfs.readthedocs.io for more information. 00:07:38.452 00:07:38.452 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:38.452 NOTE: several default settings have changed in version 5.15, please make sure 00:07:38.452 this does not affect your deployments: 00:07:38.452 - DUP for metadata (-m dup) 00:07:38.452 - enabled no-holes (-O no-holes) 00:07:38.452 - enabled free-space-tree (-R free-space-tree) 00:07:38.452 00:07:38.452 Label: (null) 00:07:38.452 UUID: 2e3b4bbd-09c4-4cab-b760-a766c206c52c 00:07:38.452 Node size: 16384 00:07:38.452 Sector size: 4096 00:07:38.452 Filesystem size: 510.00MiB 00:07:38.452 Block group profiles: 00:07:38.452 Data: single 8.00MiB 00:07:38.452 Metadata: DUP 32.00MiB 00:07:38.452 System: DUP 8.00MiB 00:07:38.452 SSD detected: yes 00:07:38.452 Zoned device: no 00:07:38.452 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:38.452 Runtime features: free-space-tree 00:07:38.452 Checksum: crc32c 00:07:38.452 Number of devices: 1 00:07:38.452 Devices: 00:07:38.452 ID SIZE PATH 00:07:38.452 1 510.00MiB /dev/nvme0n1p1 00:07:38.452 00:07:38.452 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:07:38.452 01:10:13 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:39.391 01:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:39.391 01:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:39.391 01:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:39.391 01:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:39.391 01:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:39.391 01:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:39.391 01:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3947405 00:07:39.391 01:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:39.391 01:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:39.391 01:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:39.391 01:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:39.391 00:07:39.391 real 0m1.192s 00:07:39.391 user 0m0.036s 00:07:39.391 sys 0m0.135s 00:07:39.391 01:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:39.391 01:10:14 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:39.391 ************************************ 00:07:39.391 END TEST filesystem_in_capsule_btrfs 00:07:39.391 ************************************ 00:07:39.391 01:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:39.391 01:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:39.391 01:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:39.391 01:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:39.391 ************************************ 00:07:39.391 START TEST filesystem_in_capsule_xfs 00:07:39.391 ************************************ 00:07:39.391 01:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:07:39.391 01:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:39.391 01:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:39.391 01:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:39.391 01:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:07:39.391 01:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:07:39.391 01:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:07:39.391 01:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:07:39.391 01:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:07:39.391 01:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:07:39.391 01:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:39.651 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:39.651 = sectsz=512 attr=2, projid32bit=1 00:07:39.651 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:39.651 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:39.651 data = bsize=4096 blocks=130560, imaxpct=25 00:07:39.651 = sunit=0 swidth=0 blks 00:07:39.651 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:39.651 log =internal log bsize=4096 blocks=16384, version=2 00:07:39.651 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:39.651 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:40.589 Discarding blocks...Done. 
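[Editor's note] The trace above exercises the same create-filesystem-and-verify flow three times (ext4, btrfs, xfs): connect the initiator to the SPDK TCP subsystem, wait for the namespace to show up, partition it, build a filesystem, and prove basic I/O through a mount. The sketch below condenses that flow using only commands that appear in the trace; the NQN, host UUID, serial and device names are the values from this particular run, and the simple until-loop stands in for the harness's bounded waitforserial retry.

  # connect the initiator to the SPDK NVMe/TCP subsystem and wait for its namespace
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
      --hostid=006f0d1b-21c0-e711-906e-00163566263e
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done

  # partition the exported namespace and build a filesystem on it
  # (ext4 shown; the btrfs and xfs runs use "mkfs.btrfs -f" / "mkfs.xfs -f")
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe
  mkfs.ext4 -F /dev/nvme0n1p1

  # verify basic I/O through the filesystem, then detach it again
  mkdir -p /mnt/device
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device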
00:07:40.589 01:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:07:40.589 01:10:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.497 01:10:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.497 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:42.497 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.497 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:42.497 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:42.497 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.497 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3947405 00:07:42.497 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.497 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.497 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.497 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.497 00:07:42.497 real 0m3.006s 00:07:42.497 user 0m0.028s 00:07:42.497 sys 0m0.086s 00:07:42.497 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:42.497 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:42.497 ************************************ 00:07:42.497 END TEST filesystem_in_capsule_xfs 00:07:42.497 ************************************ 00:07:42.497 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:42.497 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:42.497 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:42.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.757 01:10:18 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3947405 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 3947405 ']' 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 3947405 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3947405 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3947405' 00:07:42.757 killing process with pid 3947405 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 3947405 00:07:42.757 [2024-05-15 01:10:18.409823] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:42.757 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 3947405 00:07:43.326 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:43.327 00:07:43.327 real 0m16.123s 00:07:43.327 user 1m2.877s 00:07:43.327 sys 0m2.026s 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.327 ************************************ 00:07:43.327 END TEST nvmf_filesystem_in_capsule 00:07:43.327 ************************************ 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:43.327 rmmod nvme_tcp 00:07:43.327 rmmod nvme_fabrics 00:07:43.327 rmmod nvme_keyring 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.327 01:10:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.864 01:10:20 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:45.864 00:07:45.864 real 0m38.491s 00:07:45.864 user 1m54.040s 00:07:45.864 sys 0m9.537s 00:07:45.864 01:10:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:45.864 01:10:20 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.864 ************************************ 00:07:45.864 END TEST nvmf_filesystem 00:07:45.864 ************************************ 00:07:45.864 01:10:21 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:45.864 01:10:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:45.864 01:10:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:45.864 01:10:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:45.864 ************************************ 00:07:45.864 START TEST nvmf_target_discovery 00:07:45.864 ************************************ 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:45.864 * Looking for test storage... 
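[Editor's note] The teardown that closes the in-capsule filesystem test above follows a fixed order: disconnect the host, delete the subsystem over RPC, stop the nvmf_tgt process, then unload the initiator-side kernel modules. A minimal sketch of that order is given below; it assumes the SPDK rpc.py client is on the path (the trace's rpc_cmd helper wraps it) and that $nvmfpid holds the PID of a target started from the same shell, both details the harness normally manages.

  # host side: drop the NVMe/TCP connection
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

  # target side: remove the subsystem over RPC, then stop the target process
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"

  # finally unload the kernel NVMe/TCP stack, as nvmftestfini does above
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics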
00:07:45.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:45.864 01:10:21 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.427 01:10:26 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:52.427 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:52.427 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:52.427 Found net devices under 0000:af:00.0: cvl_0_0 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:52.427 Found net devices under 0000:af:00.1: cvl_0_1 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:52.427 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:52.428 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.428 01:10:26 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:52.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:07:52.428 00:07:52.428 --- 10.0.0.2 ping statistics --- 00:07:52.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.428 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:52.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:07:52.428 00:07:52.428 --- 10.0.0.1 ping statistics --- 00:07:52.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.428 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3953752 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3953752 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 3953752 ']' 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
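[Editor's note] Before the discovery test can start, the harness wires the two e810 ports into a point-to-point NVMe/TCP setup: one port is moved into a network namespace and becomes the target side (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1). The trace above shows exactly this; the sketch below repeats the same wiring. The cvl_0_0/cvl_0_1 interface names are taken from this run and will differ on other hardware, and backgrounding nvmf_tgt with $! stands in for the harness's nvmfappstart/waitforlisten helpers, which also poll for the /var/tmp/spdk.sock RPC socket.

  # target port goes into its own namespace, initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # allow NVMe/TCP traffic in and confirm reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # start the SPDK target inside the namespace (path relative to the SPDK build tree)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!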
00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.428 01:10:27 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:52.428 [2024-05-15 01:10:27.297448] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:07:52.428 [2024-05-15 01:10:27.297495] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.428 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.428 [2024-05-15 01:10:27.372336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.428 [2024-05-15 01:10:27.446470] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.428 [2024-05-15 01:10:27.446505] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.428 [2024-05-15 01:10:27.446515] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.428 [2024-05-15 01:10:27.446524] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.428 [2024-05-15 01:10:27.446531] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.428 [2024-05-15 01:10:27.446577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.428 [2024-05-15 01:10:27.446672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.428 [2024-05-15 01:10:27.446737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.428 [2024-05-15 01:10:27.446739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.428 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:52.428 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:07:52.428 01:10:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:52.428 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.428 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.687 01:10:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.687 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:52.687 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.687 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 [2024-05-15 01:10:28.155013] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null1 102400 512 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 Null1 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 [2024-05-15 01:10:28.207139] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:52.688 [2024-05-15 01:10:28.207356] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 Null2 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 01:10:28 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 Null3 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 Null4 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.688 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:07:52.947 00:07:52.947 Discovery Log Number of Records 6, Generation counter 6 00:07:52.947 =====Discovery Log Entry 0====== 00:07:52.947 trtype: tcp 00:07:52.947 adrfam: ipv4 00:07:52.947 subtype: current discovery subsystem 00:07:52.947 treq: not required 00:07:52.947 portid: 0 00:07:52.947 trsvcid: 4420 00:07:52.947 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:52.947 traddr: 10.0.0.2 00:07:52.948 eflags: explicit discovery connections, duplicate discovery information 00:07:52.948 sectype: none 00:07:52.948 =====Discovery Log Entry 1====== 00:07:52.948 trtype: tcp 00:07:52.948 adrfam: ipv4 00:07:52.948 subtype: nvme subsystem 00:07:52.948 treq: not required 00:07:52.948 portid: 0 00:07:52.948 trsvcid: 4420 00:07:52.948 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:52.948 traddr: 10.0.0.2 00:07:52.948 eflags: none 00:07:52.948 sectype: none 00:07:52.948 =====Discovery Log Entry 2====== 00:07:52.948 trtype: tcp 00:07:52.948 adrfam: ipv4 00:07:52.948 subtype: nvme subsystem 00:07:52.948 treq: not required 00:07:52.948 portid: 0 00:07:52.948 trsvcid: 4420 00:07:52.948 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:52.948 traddr: 10.0.0.2 00:07:52.948 eflags: none 00:07:52.948 sectype: none 00:07:52.948 =====Discovery Log Entry 3====== 00:07:52.948 trtype: tcp 00:07:52.948 adrfam: ipv4 00:07:52.948 subtype: nvme subsystem 00:07:52.948 treq: not required 00:07:52.948 portid: 0 00:07:52.948 trsvcid: 4420 00:07:52.948 subnqn: 
nqn.2016-06.io.spdk:cnode3 00:07:52.948 traddr: 10.0.0.2 00:07:52.948 eflags: none 00:07:52.948 sectype: none 00:07:52.948 =====Discovery Log Entry 4====== 00:07:52.948 trtype: tcp 00:07:52.948 adrfam: ipv4 00:07:52.948 subtype: nvme subsystem 00:07:52.948 treq: not required 00:07:52.948 portid: 0 00:07:52.948 trsvcid: 4420 00:07:52.948 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:52.948 traddr: 10.0.0.2 00:07:52.948 eflags: none 00:07:52.948 sectype: none 00:07:52.948 =====Discovery Log Entry 5====== 00:07:52.948 trtype: tcp 00:07:52.948 adrfam: ipv4 00:07:52.948 subtype: discovery subsystem referral 00:07:52.948 treq: not required 00:07:52.948 portid: 0 00:07:52.948 trsvcid: 4430 00:07:52.948 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:52.948 traddr: 10.0.0.2 00:07:52.948 eflags: none 00:07:52.948 sectype: none 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:52.948 Perform nvmf subsystem discovery via RPC 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.948 [ 00:07:52.948 { 00:07:52.948 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:52.948 "subtype": "Discovery", 00:07:52.948 "listen_addresses": [ 00:07:52.948 { 00:07:52.948 "trtype": "TCP", 00:07:52.948 "adrfam": "IPv4", 00:07:52.948 "traddr": "10.0.0.2", 00:07:52.948 "trsvcid": "4420" 00:07:52.948 } 00:07:52.948 ], 00:07:52.948 "allow_any_host": true, 00:07:52.948 "hosts": [] 00:07:52.948 }, 00:07:52.948 { 00:07:52.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:52.948 "subtype": "NVMe", 00:07:52.948 "listen_addresses": [ 00:07:52.948 { 00:07:52.948 "trtype": "TCP", 00:07:52.948 "adrfam": "IPv4", 00:07:52.948 "traddr": "10.0.0.2", 00:07:52.948 "trsvcid": "4420" 00:07:52.948 } 00:07:52.948 ], 00:07:52.948 "allow_any_host": true, 00:07:52.948 "hosts": [], 00:07:52.948 "serial_number": "SPDK00000000000001", 00:07:52.948 "model_number": "SPDK bdev Controller", 00:07:52.948 "max_namespaces": 32, 00:07:52.948 "min_cntlid": 1, 00:07:52.948 "max_cntlid": 65519, 00:07:52.948 "namespaces": [ 00:07:52.948 { 00:07:52.948 "nsid": 1, 00:07:52.948 "bdev_name": "Null1", 00:07:52.948 "name": "Null1", 00:07:52.948 "nguid": "B26CA60C6C6A480F97651FB59A2CFEDC", 00:07:52.948 "uuid": "b26ca60c-6c6a-480f-9765-1fb59a2cfedc" 00:07:52.948 } 00:07:52.948 ] 00:07:52.948 }, 00:07:52.948 { 00:07:52.948 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:52.948 "subtype": "NVMe", 00:07:52.948 "listen_addresses": [ 00:07:52.948 { 00:07:52.948 "trtype": "TCP", 00:07:52.948 "adrfam": "IPv4", 00:07:52.948 "traddr": "10.0.0.2", 00:07:52.948 "trsvcid": "4420" 00:07:52.948 } 00:07:52.948 ], 00:07:52.948 "allow_any_host": true, 00:07:52.948 "hosts": [], 00:07:52.948 "serial_number": "SPDK00000000000002", 00:07:52.948 "model_number": "SPDK bdev Controller", 00:07:52.948 "max_namespaces": 32, 00:07:52.948 "min_cntlid": 1, 00:07:52.948 "max_cntlid": 65519, 00:07:52.948 "namespaces": [ 00:07:52.948 { 00:07:52.948 "nsid": 1, 00:07:52.948 "bdev_name": "Null2", 00:07:52.948 "name": "Null2", 00:07:52.948 "nguid": "B26FFF0E9E484BD6B2AB195DA5D2175D", 00:07:52.948 "uuid": "b26fff0e-9e48-4bd6-b2ab-195da5d2175d" 00:07:52.948 } 00:07:52.948 ] 00:07:52.948 }, 00:07:52.948 { 00:07:52.948 "nqn": "nqn.2016-06.io.spdk:cnode3", 
00:07:52.948 "subtype": "NVMe", 00:07:52.948 "listen_addresses": [ 00:07:52.948 { 00:07:52.948 "trtype": "TCP", 00:07:52.948 "adrfam": "IPv4", 00:07:52.948 "traddr": "10.0.0.2", 00:07:52.948 "trsvcid": "4420" 00:07:52.948 } 00:07:52.948 ], 00:07:52.948 "allow_any_host": true, 00:07:52.948 "hosts": [], 00:07:52.948 "serial_number": "SPDK00000000000003", 00:07:52.948 "model_number": "SPDK bdev Controller", 00:07:52.948 "max_namespaces": 32, 00:07:52.948 "min_cntlid": 1, 00:07:52.948 "max_cntlid": 65519, 00:07:52.948 "namespaces": [ 00:07:52.948 { 00:07:52.948 "nsid": 1, 00:07:52.948 "bdev_name": "Null3", 00:07:52.948 "name": "Null3", 00:07:52.948 "nguid": "83C889A569CA4D81B0EE3B44962DC9DF", 00:07:52.948 "uuid": "83c889a5-69ca-4d81-b0ee-3b44962dc9df" 00:07:52.948 } 00:07:52.948 ] 00:07:52.948 }, 00:07:52.948 { 00:07:52.948 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:52.948 "subtype": "NVMe", 00:07:52.948 "listen_addresses": [ 00:07:52.948 { 00:07:52.948 "trtype": "TCP", 00:07:52.948 "adrfam": "IPv4", 00:07:52.948 "traddr": "10.0.0.2", 00:07:52.948 "trsvcid": "4420" 00:07:52.948 } 00:07:52.948 ], 00:07:52.948 "allow_any_host": true, 00:07:52.948 "hosts": [], 00:07:52.948 "serial_number": "SPDK00000000000004", 00:07:52.948 "model_number": "SPDK bdev Controller", 00:07:52.948 "max_namespaces": 32, 00:07:52.948 "min_cntlid": 1, 00:07:52.948 "max_cntlid": 65519, 00:07:52.948 "namespaces": [ 00:07:52.948 { 00:07:52.948 "nsid": 1, 00:07:52.948 "bdev_name": "Null4", 00:07:52.948 "name": "Null4", 00:07:52.948 "nguid": "C5C23797A8AF468CACE685CE19DCBBDC", 00:07:52.948 "uuid": "c5c23797-a8af-468c-ace6-85ce19dcbbdc" 00:07:52.948 } 00:07:52.948 ] 00:07:52.948 } 00:07:52.948 ] 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:52.948 01:10:28 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.948 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.949 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.949 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:52.949 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:52.949 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.949 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.949 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:53.209 01:10:28 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:53.209 rmmod nvme_tcp 00:07:53.209 rmmod nvme_fabrics 00:07:53.209 rmmod nvme_keyring 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3953752 ']' 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3953752 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 3953752 ']' 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 3953752 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3953752 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3953752' 00:07:53.209 killing process with pid 3953752 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 3953752 00:07:53.209 [2024-05-15 01:10:28.803701] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:53.209 01:10:28 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 3953752 00:07:53.468 01:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:53.468 01:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:53.468 01:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:53.468 01:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:53.468 01:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:53.468 01:10:29 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.468 01:10:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.468 01:10:29 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.417 01:10:31 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 
-- # ip -4 addr flush cvl_0_1 00:07:55.417 00:07:55.417 real 0m10.044s 00:07:55.417 user 0m7.886s 00:07:55.417 sys 0m4.957s 00:07:55.417 01:10:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:55.417 01:10:31 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:55.417 ************************************ 00:07:55.417 END TEST nvmf_target_discovery 00:07:55.417 ************************************ 00:07:55.678 01:10:31 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:55.678 01:10:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:55.678 01:10:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.678 01:10:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:55.678 ************************************ 00:07:55.678 START TEST nvmf_referrals 00:07:55.678 ************************************ 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:55.678 * Looking for test storage... 00:07:55.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:55.678 01:10:31 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.252 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.252 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:02.252 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:02.252 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:02.252 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:02.253 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:02.253 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.253 01:10:37 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:02.253 Found net devices under 0000:af:00.0: cvl_0_0 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:02.253 Found net devices under 0000:af:00.1: cvl_0_1 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
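The traces just above and below are test/nvmf/common.sh's nvmf_tcp_init carving the two ice ports into a target/initiator pair: one port is moved into a private network namespace for the SPDK target, the other stays in the root namespace for the host-side nvme commands. A rough manual equivalent, run as root, using the interface names and 10.0.0.x addresses observed in this run (both are specific to this machine and would need adjusting elsewhere):

  # Split the two ports: cvl_0_0 becomes the target NIC inside a namespace,
  # cvl_0_1 stays in the root namespace as the initiator NIC.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP in
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

The target process is then launched under ip netns exec cvl_0_0_ns_spdk, as the trace below shows, so that it listens on 10.0.0.2 while the nvme-cli commands run from the root namespace.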
00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:02.253 01:10:37 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:02.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:08:02.513 00:08:02.513 --- 10.0.0.2 ping statistics --- 00:08:02.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.513 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:02.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:08:02.513 00:08:02.513 --- 10.0.0.1 ping statistics --- 00:08:02.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.513 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3957734 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3957734 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 3957734 ']' 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:02.513 01:10:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:02.513 [2024-05-15 01:10:38.153085] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:08:02.513 [2024-05-15 01:10:38.153139] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.513 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.773 [2024-05-15 01:10:38.229108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.773 [2024-05-15 01:10:38.304705] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.773 [2024-05-15 01:10:38.304745] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.773 [2024-05-15 01:10:38.304754] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.773 [2024-05-15 01:10:38.304762] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.773 [2024-05-15 01:10:38.304768] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.773 [2024-05-15 01:10:38.304821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.773 [2024-05-15 01:10:38.304916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.773 [2024-05-15 01:10:38.305000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.773 [2024-05-15 01:10:38.305002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.343 01:10:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:03.343 01:10:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:08:03.343 01:10:38 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:03.343 01:10:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:03.343 01:10:38 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:03.343 01:10:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.343 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:03.343 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.343 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:03.343 [2024-05-15 01:10:39.012078] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.343 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.343 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:03.343 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.343 01:10:39 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:03.343 [2024-05-15 01:10:39.028078] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:03.343 [2024-05-15 01:10:39.028323] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:03.343 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.343 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:03.343 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.343 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:03.602 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.602 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:03.602 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.602 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:03.602 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.602 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:03.602 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.602 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:03.602 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 
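The get_referral_ips rpc check above is plain SPDK RPC traffic. A minimal sketch of the same target-side sequence, assuming the stock scripts/rpc.py client (the interface rpc_cmd drives in these scripts) talking to the nvmf_tgt started above over its default RPC socket, with the addresses and ports used in this run:

  # Create the TCP transport and expose the discovery service on 10.0.0.2:8009.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  # Register three discovery referrals pointing at port 4430.
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  # Verify: expect a count of 3 and the three traddr values, sorted.
  scripts/rpc.py nvmf_discovery_get_referrals | jq length
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

The nvme-cli counterpart of the same check follows in the trace below.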
00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:03.603 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
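The matching get_referral_ips nvme check reads the same information from the host side, through the discovery service rather than the RPC socket. A sketch with nvme-cli and jq, using the invocation seen in this trace (the hostnqn/hostid values are the ones generated for this run; any valid host NQN would do):

  # Query the discovery log on 10.0.0.2:8009 and keep every record except the
  # current discovery subsystem itself (here, the referral entries).
  nvme discover \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
      --hostid=006f0d1b-21c0-e711-906e-00163566263e \
      -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
    | sort

Earlier in the test, with the three referrals registered, this pipeline printed 127.0.0.2, 127.0.0.3 and 127.0.0.4; at this point they have just been removed, so the empty result checked in the next trace lines is the expected answer.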
00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.863 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.122 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.122 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:04.122 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:04.122 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.122 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:04.122 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.122 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.122 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:04.122 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.122 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:04.122 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:04.122 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:04.122 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.122 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.122 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:04.122 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.122 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.122 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:04.123 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:04.123 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:04.123 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:04.123 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:04.123 01:10:39 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.123 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:04.382 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:04.382 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:04.382 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:04.382 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:04.382 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.382 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:04.382 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:04.382 01:10:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:04.382 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.382 01:10:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.382 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.382 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:04.382 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:04.382 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.382 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:04.382 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.382 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.382 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:04.382 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.382 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:04.382 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:04.382 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:04.382 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.382 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.382 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:04.382 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.382 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:04.642 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:04.642 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:04.642 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:04.642 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:04.642 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:04.642 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.642 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:04.642 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:04.642 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:04.642 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:04.642 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:04.642 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.642 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype 
!= "current discovery subsystem").traddr' 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:04.902 01:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:04.902 rmmod nvme_tcp 00:08:04.902 rmmod nvme_fabrics 00:08:05.161 rmmod nvme_keyring 00:08:05.161 01:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:05.161 01:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:05.161 01:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:05.161 01:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3957734 ']' 00:08:05.161 01:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3957734 00:08:05.161 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 3957734 ']' 00:08:05.161 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 3957734 00:08:05.161 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:08:05.161 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:05.161 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3957734 00:08:05.161 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:05.161 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:05.161 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3957734' 00:08:05.161 killing process with pid 3957734 00:08:05.161 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 3957734 00:08:05.161 [2024-05-15 01:10:40.684124] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:05.161 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 3957734 00:08:05.421 01:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:05.421 01:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:05.421 01:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:05.421 01:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:05.421 01:10:40 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:05.421 01:10:40 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:05.421 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:05.421 01:10:40 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.326 01:10:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:07.326 00:08:07.326 real 0m11.791s 00:08:07.326 user 0m12.826s 00:08:07.326 sys 0m5.986s 00:08:07.326 01:10:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:07.326 01:10:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:07.326 ************************************ 00:08:07.326 END TEST nvmf_referrals 00:08:07.326 ************************************ 00:08:07.326 01:10:43 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:07.326 01:10:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:07.326 01:10:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:07.586 01:10:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:07.586 ************************************ 00:08:07.586 START TEST nvmf_connect_disconnect 00:08:07.586 ************************************ 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:07.586 * Looking for test storage... 00:08:07.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.586 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:07.587 01:10:43 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:07.587 01:10:43 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:15.715 01:10:49 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:15.715 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:15.715 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
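The loop traced immediately above and continuing below is common.sh enumerating supported NICs: it collects the e810/x722/mlx PCI IDs, then resolves each matching PCI function to the kernel net devices sitting under it. Stripped of the xtrace noise, the per-device lookup amounts to the sysfs walk below, a minimal sketch assuming a single e810 port at the 0000:af:00.0 address reported in this run:

    #!/usr/bin/env bash
    # Minimal sketch of the PCI -> net-device lookup traced in common.sh.
    # 0000:af:00.0 is the e810 port this run reported; adjust for other hosts.
    pci=0000:af:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"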
00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:15.715 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:15.716 Found net devices under 0000:af:00.0: cvl_0_0 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:15.716 Found net devices under 0000:af:00.1: cvl_0_1 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- 
# NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.716 01:10:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:15.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:08:15.716 00:08:15.716 --- 10.0.0.2 ping statistics --- 00:08:15.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.716 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:15.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:08:15.716 00:08:15.716 --- 10.0.0.1 ping statistics --- 00:08:15.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.716 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3962065 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3962065 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 3962065 ']' 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:15.716 01:10:50 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:15.716 [2024-05-15 01:10:50.373209] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
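Condensed from the nvmf_tcp_init and nvmfappstart trace above: the target-side port (cvl_0_0) is moved into a private network namespace and given 10.0.0.2/24, the initiator keeps cvl_0_1 with 10.0.0.1/24 in the root namespace, reachability is checked with ping in both directions, and the SPDK target is then launched inside the namespace. A rough sketch using only the commands visible in the trace (interface names and addresses are specific to this run):

    # Target port gets its own namespace; the initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    # ...then the target app runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF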
00:08:15.716 [2024-05-15 01:10:50.373256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.716 EAL: No free 2048 kB hugepages reported on node 1 00:08:15.716 [2024-05-15 01:10:50.446951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:15.716 [2024-05-15 01:10:50.516679] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.716 [2024-05-15 01:10:50.516735] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.716 [2024-05-15 01:10:50.516745] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.716 [2024-05-15 01:10:50.516753] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.716 [2024-05-15 01:10:50.516760] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.716 [2024-05-15 01:10:50.516813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:15.716 [2024-05-15 01:10:50.516914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:15.716 [2024-05-15 01:10:50.516977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:15.716 [2024-05-15 01:10:50.516979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.716 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:15.716 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:15.717 [2024-05-15 01:10:51.226111] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:15.717 01:10:51 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:15.717 [2024-05-15 01:10:51.280498] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:15.717 [2024-05-15 01:10:51.280772] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:15.717 01:10:51 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:19.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.120 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:33.120 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:33.120 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:33.120 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:33.120 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:33.121 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:33.121 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:33.121 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:33.121 rmmod nvme_tcp 00:08:33.121 rmmod nvme_fabrics 00:08:33.121 rmmod nvme_keyring 00:08:33.121 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:33.121 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:33.121 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:33.121 01:11:08 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3962065 ']' 00:08:33.121 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3962065 00:08:33.121 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 3962065 ']' 00:08:33.121 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 3962065 00:08:33.121 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:08:33.121 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:33.121 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3962065 00:08:33.380 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:33.380 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:33.380 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3962065' 00:08:33.380 killing process with pid 3962065 00:08:33.380 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 3962065 00:08:33.380 [2024-05-15 01:11:08.828316] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:33.380 01:11:08 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 3962065 00:08:33.380 01:11:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:33.380 01:11:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:33.380 01:11:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:33.380 01:11:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:33.380 01:11:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:33.380 01:11:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.380 01:11:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:33.380 01:11:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.915 01:11:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:35.915 00:08:35.915 real 0m28.068s 00:08:35.915 user 1m14.881s 00:08:35.915 sys 0m7.493s 00:08:35.915 01:11:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:35.915 01:11:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:35.915 ************************************ 00:08:35.915 END TEST nvmf_connect_disconnect 00:08:35.915 ************************************ 00:08:35.915 01:11:11 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:35.915 01:11:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:35.916 01:11:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:35.916 01:11:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:35.916 ************************************ 00:08:35.916 START TEST nvmf_multitarget 
00:08:35.916 ************************************ 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:35.916 * Looking for test storage... 00:08:35.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
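Before the multitarget trace continues, the connect_disconnect run above is worth summarizing, since its provisioning steps are buried in the xtrace output. On the target side the script issues five RPCs (transport, malloc bdev, subsystem, namespace, listener); on the host side each of the five iterations connects to the subsystem and disconnects again, producing the "disconnected 1 controller(s)" lines. The sketch below uses only the RPC names, flags, and addresses visible in the trace; the host-side nvme connect line is an approximation, since only the disconnect output appears in the log.

    # Target side (rpc_cmd in the trace wraps scripts/rpc.py against the target in the namespace):
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512                 # creates Malloc0: 64 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side, repeated num_iterations=5 times (connect flags assumed; disconnect matches the log output):
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # "... disconnected 1 controller(s)"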
00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:35.916 01:11:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:42.479 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:42.479 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:42.479 Found net devices under 0000:af:00.0: cvl_0_0 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:42.479 Found net devices under 0000:af:00.1: cvl_0_1 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:42.479 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:42.480 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:42.480 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:42.480 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:42.480 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.480 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.480 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.480 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:42.480 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.480 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.480 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:42.480 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.480 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.480 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:42.480 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:42.480 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.480 01:11:17 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.480 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.480 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.480 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:42.480 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.480 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:42.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:42.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:08:42.740 00:08:42.740 --- 10.0.0.2 ping statistics --- 00:08:42.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.740 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:42.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:08:42.740 00:08:42.740 --- 10.0.0.1 ping statistics --- 00:08:42.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.740 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3969066 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3969066 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 3969066 ']' 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:42.740 01:11:18 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:42.740 [2024-05-15 01:11:18.308536] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
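The multitarget trace just below drives target management through multitarget_rpc.py rather than plain rpc.py: it counts the existing targets, creates two more, deletes them again, and re-checks the count after each step. In outline, using the names and sizes that appear in this run's trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    $RPC nvmf_get_targets | jq length        # 1: only the default target exists
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    $RPC nvmf_get_targets | jq length        # now 3
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    $RPC nvmf_get_targets | jq length        # back to 1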
00:08:42.740 [2024-05-15 01:11:18.308585] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.740 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.740 [2024-05-15 01:11:18.384449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:43.035 [2024-05-15 01:11:18.459508] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.035 [2024-05-15 01:11:18.459543] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.035 [2024-05-15 01:11:18.459552] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.035 [2024-05-15 01:11:18.459560] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.035 [2024-05-15 01:11:18.459583] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.035 [2024-05-15 01:11:18.459622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.035 [2024-05-15 01:11:18.459713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:43.035 [2024-05-15 01:11:18.459799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:43.035 [2024-05-15 01:11:18.459801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.628 01:11:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:43.628 01:11:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:08:43.628 01:11:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:43.628 01:11:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:43.628 01:11:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:43.628 01:11:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:43.628 01:11:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:43.628 01:11:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:43.628 01:11:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:43.628 01:11:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:43.628 01:11:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:43.887 "nvmf_tgt_1" 00:08:43.887 01:11:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:43.887 "nvmf_tgt_2" 00:08:43.887 01:11:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:43.887 01:11:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:44.146 01:11:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:44.146 
01:11:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:44.146 true 00:08:44.146 01:11:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:44.146 true 00:08:44.146 01:11:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:44.146 01:11:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:44.405 01:11:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:44.405 01:11:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:44.405 01:11:19 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:44.405 01:11:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:44.405 01:11:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:44.405 01:11:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:44.406 01:11:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:44.406 01:11:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:44.406 01:11:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:44.406 rmmod nvme_tcp 00:08:44.406 rmmod nvme_fabrics 00:08:44.406 rmmod nvme_keyring 00:08:44.406 01:11:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:44.406 01:11:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:44.406 01:11:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:44.406 01:11:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3969066 ']' 00:08:44.406 01:11:19 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3969066 00:08:44.406 01:11:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 3969066 ']' 00:08:44.406 01:11:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 3969066 00:08:44.406 01:11:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:08:44.406 01:11:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:44.406 01:11:19 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3969066 00:08:44.406 01:11:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:44.406 01:11:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:44.406 01:11:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3969066' 00:08:44.406 killing process with pid 3969066 00:08:44.406 01:11:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 3969066 00:08:44.406 01:11:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 3969066 00:08:44.665 01:11:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:44.665 01:11:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:44.665 01:11:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:44.665 01:11:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:44.665 01:11:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:44.665 01:11:20 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.665 01:11:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.665 01:11:20 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.203 01:11:22 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:47.203 00:08:47.203 real 0m11.066s 00:08:47.203 user 0m9.550s 00:08:47.203 sys 0m5.767s 00:08:47.203 01:11:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:47.203 01:11:22 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:47.203 ************************************ 00:08:47.203 END TEST nvmf_multitarget 00:08:47.203 ************************************ 00:08:47.203 01:11:22 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:47.203 01:11:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:47.203 01:11:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:47.203 01:11:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:47.203 ************************************ 00:08:47.203 START TEST nvmf_rpc 00:08:47.203 ************************************ 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:47.204 * Looking for test storage... 00:08:47.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.204 01:11:22 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.204 
01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:47.204 01:11:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:53.787 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:53.787 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:53.787 Found net devices under 0000:af:00.0: cvl_0_0 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.787 
01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:53.787 Found net devices under 0000:af:00.1: cvl_0_1 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:53.787 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:53.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:53.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:08:53.788 00:08:53.788 --- 10.0.0.2 ping statistics --- 00:08:53.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.788 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:53.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:53.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:08:53.788 00:08:53.788 --- 10.0.0.1 ping statistics --- 00:08:53.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.788 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3973064 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3973064 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 3973064 ']' 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:53.788 01:11:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:53.788 [2024-05-15 01:11:29.463241] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
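The nvmf_tgt instance starting here listens inside a network namespace that nvmf/common.sh rebuilt just above: one cvl port is moved into a private namespace for the target, the other stays in the root namespace for the initiator, and a one-packet ping in each direction verifies the 10.0.0.0/24 link. Condensed from the commands traced in this run:

    # Recap of the nvmf_tcp_init steps traced above (device names and addresses from this run).
    ip netns add cvl_0_0_ns_spdk                                    # namespace that will host nvmf_tgt
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator-side port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                              # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # namespace -> root ns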
00:08:53.788 [2024-05-15 01:11:29.463287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.047 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.047 [2024-05-15 01:11:29.535544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:54.047 [2024-05-15 01:11:29.604221] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.047 [2024-05-15 01:11:29.604261] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.047 [2024-05-15 01:11:29.604270] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.047 [2024-05-15 01:11:29.604278] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.047 [2024-05-15 01:11:29.604301] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.047 [2024-05-15 01:11:29.604400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.047 [2024-05-15 01:11:29.604419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.047 [2024-05-15 01:11:29.604520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.047 [2024-05-15 01:11:29.604518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.622 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:54.622 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:08:54.622 01:11:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:54.622 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:54.622 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:08:54.881 "tick_rate": 2500000000, 00:08:54.881 "poll_groups": [ 00:08:54.881 { 00:08:54.881 "name": "nvmf_tgt_poll_group_000", 00:08:54.881 "admin_qpairs": 0, 00:08:54.881 "io_qpairs": 0, 00:08:54.881 "current_admin_qpairs": 0, 00:08:54.881 "current_io_qpairs": 0, 00:08:54.881 "pending_bdev_io": 0, 00:08:54.881 "completed_nvme_io": 0, 00:08:54.881 "transports": [] 00:08:54.881 }, 00:08:54.881 { 00:08:54.881 "name": "nvmf_tgt_poll_group_001", 00:08:54.881 "admin_qpairs": 0, 00:08:54.881 "io_qpairs": 0, 00:08:54.881 "current_admin_qpairs": 0, 00:08:54.881 "current_io_qpairs": 0, 00:08:54.881 "pending_bdev_io": 0, 00:08:54.881 "completed_nvme_io": 0, 00:08:54.881 "transports": [] 00:08:54.881 }, 00:08:54.881 { 00:08:54.881 "name": "nvmf_tgt_poll_group_002", 00:08:54.881 "admin_qpairs": 0, 00:08:54.881 "io_qpairs": 0, 00:08:54.881 "current_admin_qpairs": 0, 00:08:54.881 "current_io_qpairs": 0, 00:08:54.881 "pending_bdev_io": 0, 00:08:54.881 "completed_nvme_io": 0, 00:08:54.881 "transports": [] 
00:08:54.881 }, 00:08:54.881 { 00:08:54.881 "name": "nvmf_tgt_poll_group_003", 00:08:54.881 "admin_qpairs": 0, 00:08:54.881 "io_qpairs": 0, 00:08:54.881 "current_admin_qpairs": 0, 00:08:54.881 "current_io_qpairs": 0, 00:08:54.881 "pending_bdev_io": 0, 00:08:54.881 "completed_nvme_io": 0, 00:08:54.881 "transports": [] 00:08:54.881 } 00:08:54.881 ] 00:08:54.881 }' 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.881 [2024-05-15 01:11:30.434382] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:54.881 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:08:54.881 "tick_rate": 2500000000, 00:08:54.881 "poll_groups": [ 00:08:54.881 { 00:08:54.881 "name": "nvmf_tgt_poll_group_000", 00:08:54.881 "admin_qpairs": 0, 00:08:54.881 "io_qpairs": 0, 00:08:54.881 "current_admin_qpairs": 0, 00:08:54.881 "current_io_qpairs": 0, 00:08:54.881 "pending_bdev_io": 0, 00:08:54.881 "completed_nvme_io": 0, 00:08:54.881 "transports": [ 00:08:54.881 { 00:08:54.881 "trtype": "TCP" 00:08:54.881 } 00:08:54.881 ] 00:08:54.881 }, 00:08:54.881 { 00:08:54.881 "name": "nvmf_tgt_poll_group_001", 00:08:54.881 "admin_qpairs": 0, 00:08:54.881 "io_qpairs": 0, 00:08:54.881 "current_admin_qpairs": 0, 00:08:54.881 "current_io_qpairs": 0, 00:08:54.881 "pending_bdev_io": 0, 00:08:54.882 "completed_nvme_io": 0, 00:08:54.882 "transports": [ 00:08:54.882 { 00:08:54.882 "trtype": "TCP" 00:08:54.882 } 00:08:54.882 ] 00:08:54.882 }, 00:08:54.882 { 00:08:54.882 "name": "nvmf_tgt_poll_group_002", 00:08:54.882 "admin_qpairs": 0, 00:08:54.882 "io_qpairs": 0, 00:08:54.882 "current_admin_qpairs": 0, 00:08:54.882 "current_io_qpairs": 0, 00:08:54.882 "pending_bdev_io": 0, 00:08:54.882 "completed_nvme_io": 0, 00:08:54.882 "transports": [ 00:08:54.882 { 00:08:54.882 "trtype": "TCP" 00:08:54.882 } 00:08:54.882 ] 00:08:54.882 }, 00:08:54.882 { 00:08:54.882 "name": "nvmf_tgt_poll_group_003", 00:08:54.882 "admin_qpairs": 0, 00:08:54.882 "io_qpairs": 0, 00:08:54.882 "current_admin_qpairs": 0, 00:08:54.882 "current_io_qpairs": 0, 00:08:54.882 "pending_bdev_io": 0, 00:08:54.882 "completed_nvme_io": 0, 00:08:54.882 "transports": [ 00:08:54.882 { 00:08:54.882 "trtype": "TCP" 00:08:54.882 } 00:08:54.882 ] 00:08:54.882 } 00:08:54.882 ] 
00:08:54.882 }' 00:08:54.882 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:08:54.882 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:08:54.882 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:08:54.882 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:54.882 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:08:54.882 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:08:54.882 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:08:54.882 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:08:54.882 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:08:54.882 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:08:54.882 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:08:54.882 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:08:54.882 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:08:54.882 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:08:54.882 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:54.882 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.141 Malloc1 00:08:55.141 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.141 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:55.141 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.141 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.141 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.141 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:55.141 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.141 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.141 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.141 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:08:55.141 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.142 [2024-05-15 01:11:30.613322] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:55.142 [2024-05-15 01:11:30.613637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:55.142 01:11:30 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:08:55.142 [2024-05-15 01:11:30.642379] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:08:55.142 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:55.142 could not add new controller: failed to write to nvme-fabrics device 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:55.142 01:11:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:56.521 01:11:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
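The failed connect recorded above is the point of rpc.sh@58: the subsystem is created with open access, allow_any_host is then disabled, so the initiator's host NQN is rejected until nvmf_subsystem_add_host whitelists it, after which the connect at rpc.sh@62 goes through. In outline (rpc_cmd is the test suite's JSON-RPC wrapper; NVME_HOSTNQN and NVME_HOSTID come from nvmf/common.sh, as seen earlier in this log):

    # Host allow-list sequence from target/rpc.sh (NQNs and addresses as used in this run).
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1        # drop open access
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420               # rejected: host not allowed

    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN" # allow this host NQN
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420               # now succeeds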
00:08:56.521 01:11:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:08:56.521 01:11:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:08:56.521 01:11:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:08:56.521 01:11:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:08:58.427 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:08:58.427 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:08:58.427 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:08:58.427 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:08:58.427 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:08:58.427 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:08:58.427 01:11:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:58.686 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:58.686 [2024-05-15 01:11:34.211123] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:08:58.686 Failed to write to /dev/nvme-fabrics: Input/output error 00:08:58.686 could not add new controller: failed to write to nvme-fabrics device 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.686 01:11:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:00.066 01:11:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:00.066 01:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:00.066 01:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:00.066 01:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:00.066 01:11:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:01.973 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:01.973 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:01.973 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:01.973 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:01.973 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:01.973 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:01.973 01:11:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:01.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.232 [2024-05-15 01:11:37.733267] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.232 01:11:37 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:03.642 01:11:39 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:03.642 01:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:03.642 01:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:03.642 01:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:03.642 01:11:39 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:05.549 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:05.549 
01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:05.549 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:05.549 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:05.549 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:05.549 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:05.549 01:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:05.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.549 01:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:05.549 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:05.549 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:05.549 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.549 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:05.549 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.809 [2024-05-15 01:11:41.273464] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.809 01:11:41 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:07.188 01:11:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:07.188 01:11:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:07.188 01:11:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:07.188 01:11:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:07.188 01:11:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:09.094 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:09.094 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:09.094 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:09.094 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:09.094 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:09.094 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:09.094 01:11:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:09.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.353 01:11:44 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.353 [2024-05-15 01:11:44.923823] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.353 01:11:44 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:10.733 01:11:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:10.733 01:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:10.733 01:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:10.733 01:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:10.733 01:11:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:12.639 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:12.639 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:12.639 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:12.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1215 -- # local i=0 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.899 [2024-05-15 01:11:48.472728] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:12.899 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.900 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.900 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.900 01:11:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:12.900 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:12.900 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:12.900 01:11:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:12.900 01:11:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:14.278 01:11:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:09:14.278 01:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:14.278 01:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:14.278 01:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:14.278 01:11:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:16.813 01:11:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:16.813 01:11:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:16.813 01:11:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:16.813 01:11:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:16.813 01:11:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:16.813 01:11:51 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:16.813 01:11:51 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:16.813 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.813 
[2024-05-15 01:11:52.125450] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.813 01:11:52 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:18.192 01:11:53 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:18.192 01:11:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:09:18.192 01:11:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.192 01:11:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:18.192 01:11:53 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- 
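The waitforserial / waitforserial_disconnect helpers that keep reappearing in the trace are plain lsblk polls. A condensed sketch of what the xtrace output shows (the retry limit and 2-second sleep are the counters visible above; the real helpers in autotest_common.sh carry a little more bookkeeping):

  waitforserial() {              # wait until a block device with this serial shows up
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          sleep 2
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
      done
      return 1
  }

  waitforserial_disconnect() {   # wait until the serial is gone again
      local serial=$1 i=0
      while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
          (( i++ > 15 )) && return 1
          sleep 2
      done
      return 0
  }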
common/autotest_common.sh@10 -- # set +x 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.099 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 [2024-05-15 01:11:55.686752] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 [2024-05-15 01:11:55.734860] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.100 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 [2024-05-15 01:11:55.787013] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:20.360 
01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.360 [2024-05-15 01:11:55.835181] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.360 01:11:55 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.360 [2024-05-15 01:11:55.883356] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
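The second loop traced above (target/rpc.sh@99-107) runs the same subsystem lifecycle five times without ever connecting a host, which is why no nvme connect or lsblk polling appears in this stretch. Sketched from the trace (the namespace is added without an explicit NSID this time, so it is removed as NSID 1):

  for i in $(seq 1 5); do
      rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # NSID auto-assigned
      rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done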
00:09:20.360 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:20.360 "tick_rate": 2500000000, 00:09:20.360 "poll_groups": [ 00:09:20.360 { 00:09:20.360 "name": "nvmf_tgt_poll_group_000", 00:09:20.360 "admin_qpairs": 2, 00:09:20.360 "io_qpairs": 196, 00:09:20.360 "current_admin_qpairs": 0, 00:09:20.360 "current_io_qpairs": 0, 00:09:20.360 "pending_bdev_io": 0, 00:09:20.360 "completed_nvme_io": 295, 00:09:20.360 "transports": [ 00:09:20.360 { 00:09:20.360 "trtype": "TCP" 00:09:20.360 } 00:09:20.360 ] 00:09:20.360 }, 00:09:20.360 { 00:09:20.360 "name": "nvmf_tgt_poll_group_001", 00:09:20.360 "admin_qpairs": 2, 00:09:20.360 "io_qpairs": 196, 00:09:20.360 "current_admin_qpairs": 0, 00:09:20.360 "current_io_qpairs": 0, 00:09:20.360 "pending_bdev_io": 0, 00:09:20.360 "completed_nvme_io": 263, 00:09:20.360 "transports": [ 00:09:20.360 { 00:09:20.360 "trtype": "TCP" 00:09:20.360 } 00:09:20.360 ] 00:09:20.360 }, 00:09:20.360 { 00:09:20.360 "name": "nvmf_tgt_poll_group_002", 00:09:20.360 "admin_qpairs": 1, 00:09:20.360 "io_qpairs": 196, 00:09:20.360 "current_admin_qpairs": 0, 00:09:20.360 "current_io_qpairs": 0, 00:09:20.360 "pending_bdev_io": 0, 00:09:20.361 "completed_nvme_io": 230, 00:09:20.361 "transports": [ 00:09:20.361 { 00:09:20.361 "trtype": "TCP" 00:09:20.361 } 00:09:20.361 ] 00:09:20.361 }, 00:09:20.361 { 00:09:20.361 "name": "nvmf_tgt_poll_group_003", 00:09:20.361 "admin_qpairs": 2, 00:09:20.361 "io_qpairs": 196, 00:09:20.361 "current_admin_qpairs": 0, 00:09:20.361 "current_io_qpairs": 0, 00:09:20.361 "pending_bdev_io": 0, 00:09:20.361 "completed_nvme_io": 346, 00:09:20.361 "transports": [ 00:09:20.361 { 00:09:20.361 "trtype": "TCP" 00:09:20.361 } 00:09:20.361 ] 00:09:20.361 } 00:09:20.361 ] 00:09:20.361 }' 00:09:20.361 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:20.361 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:20.361 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:20.361 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:20.361 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:20.361 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:20.361 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:20.361 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:20.361 01:11:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:20.361 01:11:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:09:20.361 01:11:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:20.361 01:11:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:20.361 01:11:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:20.361 01:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:20.361 01:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:20.361 01:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:20.361 01:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:20.361 01:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:20.361 01:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:20.361 rmmod nvme_tcp 00:09:20.620 rmmod nvme_fabrics 00:09:20.620 rmmod nvme_keyring 00:09:20.620 
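The qpair totals checked just above come from the jsum helper traced at target/rpc.sh@19-20: it applies a jq filter to the nvmf_get_stats JSON and sums the results with awk. Roughly, assuming $stats holds the JSON captured above:

  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1}END{print s}'
  }

  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+2+1+2 = 7 in this run
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 4 x 196 = 784 in this run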
01:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:20.620 01:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:20.620 01:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:20.620 01:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3973064 ']' 00:09:20.620 01:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3973064 00:09:20.620 01:11:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 3973064 ']' 00:09:20.620 01:11:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 3973064 00:09:20.620 01:11:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:09:20.620 01:11:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:20.620 01:11:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3973064 00:09:20.620 01:11:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:20.620 01:11:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:20.620 01:11:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3973064' 00:09:20.620 killing process with pid 3973064 00:09:20.620 01:11:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 3973064 00:09:20.620 [2024-05-15 01:11:56.151533] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:20.620 01:11:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 3973064 00:09:20.935 01:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:20.935 01:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:20.935 01:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:20.935 01:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:20.935 01:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:20.935 01:11:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.935 01:11:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.935 01:11:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.871 01:11:58 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:22.871 00:09:22.871 real 0m36.059s 00:09:22.871 user 1m47.687s 00:09:22.871 sys 0m8.266s 00:09:22.871 01:11:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:22.871 01:11:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.871 ************************************ 00:09:22.871 END TEST nvmf_rpc 00:09:22.871 ************************************ 00:09:22.871 01:11:58 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:22.871 01:11:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:22.871 01:11:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:22.871 01:11:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:22.871 ************************************ 00:09:22.871 START TEST nvmf_invalid 00:09:22.871 ************************************ 00:09:22.871 01:11:58 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:23.131 * Looking for test storage... 00:09:23.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:23.131 01:11:58 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:29.703 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:29.703 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:29.703 Found net devices under 0000:af:00.0: cvl_0_0 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:29.703 Found net devices under 0000:af:00.1: cvl_0_1 00:09:29.703 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:29.704 01:12:04 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:29.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:29.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:09:29.704 00:09:29.704 --- 10.0.0.2 ping statistics --- 00:09:29.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.704 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:29.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:29.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:09:29.704 00:09:29.704 --- 10.0.0.1 ping statistics --- 00:09:29.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:29.704 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3981857 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3981857 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 3981857 ']' 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:29.704 01:12:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:29.704 [2024-05-15 01:12:05.153792] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
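The nvmf_tcp_init steps traced a little earlier give the test its point-to-point topology: the target-side port is moved into its own network namespace and the two ends ping each other before any NVMe traffic flows. Condensed from the trace (interface names and addresses exactly as shown):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> host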
00:09:29.704 [2024-05-15 01:12:05.153840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.704 EAL: No free 2048 kB hugepages reported on node 1 00:09:29.704 [2024-05-15 01:12:05.228047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:29.704 [2024-05-15 01:12:05.302407] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.704 [2024-05-15 01:12:05.302441] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.704 [2024-05-15 01:12:05.302451] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.704 [2024-05-15 01:12:05.302459] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.704 [2024-05-15 01:12:05.302482] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:29.704 [2024-05-15 01:12:05.302529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:29.704 [2024-05-15 01:12:05.302641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.704 [2024-05-15 01:12:05.302729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.704 [2024-05-15 01:12:05.302730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.640 01:12:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:30.640 01:12:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:09:30.640 01:12:05 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:30.640 01:12:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:30.640 01:12:05 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:30.640 01:12:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:30.640 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:30.640 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11601 00:09:30.640 [2024-05-15 01:12:06.170588] nvmf_rpc.c: 395:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:30.640 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:30.640 { 00:09:30.640 "nqn": "nqn.2016-06.io.spdk:cnode11601", 00:09:30.640 "tgt_name": "foobar", 00:09:30.640 "method": "nvmf_create_subsystem", 00:09:30.640 "req_id": 1 00:09:30.640 } 00:09:30.640 Got JSON-RPC error response 00:09:30.640 response: 00:09:30.640 { 00:09:30.640 "code": -32603, 00:09:30.640 "message": "Unable to find target foobar" 00:09:30.640 }' 00:09:30.640 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:30.640 { 00:09:30.640 "nqn": "nqn.2016-06.io.spdk:cnode11601", 00:09:30.640 "tgt_name": "foobar", 00:09:30.640 "method": "nvmf_create_subsystem", 00:09:30.640 "req_id": 1 00:09:30.640 } 00:09:30.640 Got JSON-RPC error response 00:09:30.640 response: 00:09:30.640 { 00:09:30.640 "code": -32603, 00:09:30.640 "message": "Unable to find target foobar" 00:09:30.640 } == *\U\n\a\b\l\e\ 
\t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:30.640 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:30.640 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22316 00:09:30.899 [2024-05-15 01:12:06.363347] nvmf_rpc.c: 412:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22316: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:30.899 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:30.899 { 00:09:30.899 "nqn": "nqn.2016-06.io.spdk:cnode22316", 00:09:30.899 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:30.899 "method": "nvmf_create_subsystem", 00:09:30.899 "req_id": 1 00:09:30.899 } 00:09:30.899 Got JSON-RPC error response 00:09:30.899 response: 00:09:30.899 { 00:09:30.899 "code": -32602, 00:09:30.899 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:30.899 }' 00:09:30.899 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:09:30.899 { 00:09:30.899 "nqn": "nqn.2016-06.io.spdk:cnode22316", 00:09:30.899 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:30.899 "method": "nvmf_create_subsystem", 00:09:30.899 "req_id": 1 00:09:30.899 } 00:09:30.899 Got JSON-RPC error response 00:09:30.899 response: 00:09:30.899 { 00:09:30.899 "code": -32602, 00:09:30.899 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:30.899 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:30.899 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:30.899 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode26045 00:09:30.899 [2024-05-15 01:12:06.555949] nvmf_rpc.c: 421:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26045: invalid model number 'SPDK_Controller' 00:09:30.899 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:30.899 { 00:09:30.899 "nqn": "nqn.2016-06.io.spdk:cnode26045", 00:09:30.899 "model_number": "SPDK_Controller\u001f", 00:09:30.899 "method": "nvmf_create_subsystem", 00:09:30.899 "req_id": 1 00:09:30.899 } 00:09:30.899 Got JSON-RPC error response 00:09:30.899 response: 00:09:30.899 { 00:09:30.899 "code": -32602, 00:09:30.899 "message": "Invalid MN SPDK_Controller\u001f" 00:09:30.899 }' 00:09:30.899 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:30.899 { 00:09:30.899 "nqn": "nqn.2016-06.io.spdk:cnode26045", 00:09:30.899 "model_number": "SPDK_Controller\u001f", 00:09:30.899 "method": "nvmf_create_subsystem", 00:09:30.899 "req_id": 1 00:09:30.899 } 00:09:30.899 Got JSON-RPC error response 00:09:30.899 response: 00:09:30.899 { 00:09:30.899 "code": -32602, 00:09:30.899 "message": "Invalid MN SPDK_Controller\u001f" 00:09:30.899 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' 
'90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.158 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ j == \- ]] 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'j?zG|PUG&`_.(t_3kPHoJ' 00:09:31.159 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'j?zG|PUG&`_.(t_3kPHoJ' nqn.2016-06.io.spdk:cnode2947 00:09:31.419 [2024-05-15 01:12:06.909139] nvmf_rpc.c: 412:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2947: invalid serial number 'j?zG|PUG&`_.(t_3kPHoJ' 00:09:31.419 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:31.419 { 00:09:31.419 "nqn": "nqn.2016-06.io.spdk:cnode2947", 00:09:31.419 "serial_number": "j?zG|PUG&`_.(t_3kPHoJ", 00:09:31.419 "method": "nvmf_create_subsystem", 00:09:31.419 "req_id": 1 00:09:31.419 } 00:09:31.419 Got JSON-RPC error response 00:09:31.419 response: 00:09:31.419 { 00:09:31.419 "code": -32602, 00:09:31.419 "message": "Invalid SN j?zG|PUG&`_.(t_3kPHoJ" 00:09:31.419 }' 00:09:31.419 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:31.419 { 00:09:31.419 "nqn": "nqn.2016-06.io.spdk:cnode2947", 00:09:31.419 "serial_number": "j?zG|PUG&`_.(t_3kPHoJ", 00:09:31.419 "method": "nvmf_create_subsystem", 00:09:31.419 "req_id": 1 00:09:31.419 } 00:09:31.419 Got JSON-RPC error response 00:09:31.419 response: 00:09:31.419 { 00:09:31.419 "code": -32602, 00:09:31.419 "message": "Invalid SN j?zG|PUG&`_.(t_3kPHoJ" 00:09:31.419 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:31.419 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:31.419 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ 
)) 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.420 01:12:06 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
122 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.420 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:09:31.679 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x5a' 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 
00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ d == \- ]] 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'dulE" @D|WLj]W[x/zt"5`<,jZNvu+H3G(W:co02e' 00:09:31.680 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem -d 'dulE" @D|WLj]W[x/zt"5`<,jZNvu+H3G(W:co02e' nqn.2016-06.io.spdk:cnode9859 00:09:31.939 [2024-05-15 01:12:07.414874] nvmf_rpc.c: 421:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9859: invalid model number 'dulE" @D|WLj]W[x/zt"5`<,jZNvu+H3G(W:co02e' 00:09:31.939 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:31.939 { 00:09:31.939 "nqn": "nqn.2016-06.io.spdk:cnode9859", 00:09:31.939 "model_number": "dulE\" @D|WLj]W[x/zt\"5`<,jZNvu+H3G(W:co02e", 00:09:31.939 "method": "nvmf_create_subsystem", 00:09:31.939 "req_id": 1 00:09:31.939 } 00:09:31.939 Got JSON-RPC error response 00:09:31.939 response: 00:09:31.939 { 00:09:31.939 "code": -32602, 00:09:31.939 "message": "Invalid MN dulE\" @D|WLj]W[x/zt\"5`<,jZNvu+H3G(W:co02e" 00:09:31.939 }' 00:09:31.939 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:31.939 { 00:09:31.939 "nqn": "nqn.2016-06.io.spdk:cnode9859", 00:09:31.939 "model_number": "dulE\" @D|WLj]W[x/zt\"5`<,jZNvu+H3G(W:co02e", 00:09:31.939 "method": "nvmf_create_subsystem", 00:09:31.939 "req_id": 1 00:09:31.939 } 00:09:31.939 Got JSON-RPC error response 00:09:31.939 response: 00:09:31.939 { 00:09:31.939 "code": -32602, 00:09:31.939 "message": "Invalid MN dulE\" @D|WLj]W[x/zt\"5`<,jZNvu+H3G(W:co02e" 00:09:31.939 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:31.939 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:31.939 [2024-05-15 01:12:07.607600] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.197 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:32.197 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:32.197 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:09:32.197 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:32.197 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:09:32.197 01:12:07 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:32.455 [2024-05-15 01:12:07.992827] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:32.455 [2024-05-15 01:12:07.992890] nvmf_rpc.c: 793:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:32.455 01:12:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:32.455 { 00:09:32.455 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:32.455 "listen_address": { 00:09:32.455 "trtype": "tcp", 00:09:32.455 "traddr": "", 00:09:32.455 "trsvcid": "4421" 00:09:32.455 }, 00:09:32.455 "method": "nvmf_subsystem_remove_listener", 00:09:32.455 "req_id": 1 00:09:32.455 } 00:09:32.455 Got JSON-RPC error response 00:09:32.455 response: 00:09:32.455 { 00:09:32.455 "code": -32602, 00:09:32.455 "message": "Invalid parameters" 00:09:32.456 }' 00:09:32.456 01:12:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:32.456 { 00:09:32.456 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:32.456 "listen_address": { 00:09:32.456 "trtype": "tcp", 00:09:32.456 "traddr": "", 
00:09:32.456 "trsvcid": "4421" 00:09:32.456 }, 00:09:32.456 "method": "nvmf_subsystem_remove_listener", 00:09:32.456 "req_id": 1 00:09:32.456 } 00:09:32.456 Got JSON-RPC error response 00:09:32.456 response: 00:09:32.456 { 00:09:32.456 "code": -32602, 00:09:32.456 "message": "Invalid parameters" 00:09:32.456 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:32.456 01:12:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22216 -i 0 00:09:32.714 [2024-05-15 01:12:08.181478] nvmf_rpc.c: 433:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22216: invalid cntlid range [0-65519] 00:09:32.714 01:12:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:32.714 { 00:09:32.714 "nqn": "nqn.2016-06.io.spdk:cnode22216", 00:09:32.714 "min_cntlid": 0, 00:09:32.714 "method": "nvmf_create_subsystem", 00:09:32.714 "req_id": 1 00:09:32.714 } 00:09:32.714 Got JSON-RPC error response 00:09:32.714 response: 00:09:32.714 { 00:09:32.714 "code": -32602, 00:09:32.714 "message": "Invalid cntlid range [0-65519]" 00:09:32.714 }' 00:09:32.714 01:12:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:32.714 { 00:09:32.714 "nqn": "nqn.2016-06.io.spdk:cnode22216", 00:09:32.714 "min_cntlid": 0, 00:09:32.714 "method": "nvmf_create_subsystem", 00:09:32.714 "req_id": 1 00:09:32.714 } 00:09:32.714 Got JSON-RPC error response 00:09:32.714 response: 00:09:32.714 { 00:09:32.714 "code": -32602, 00:09:32.714 "message": "Invalid cntlid range [0-65519]" 00:09:32.714 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:32.714 01:12:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18862 -i 65520 00:09:32.714 [2024-05-15 01:12:08.350085] nvmf_rpc.c: 433:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18862: invalid cntlid range [65520-65519] 00:09:32.714 01:12:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:32.714 { 00:09:32.714 "nqn": "nqn.2016-06.io.spdk:cnode18862", 00:09:32.714 "min_cntlid": 65520, 00:09:32.714 "method": "nvmf_create_subsystem", 00:09:32.714 "req_id": 1 00:09:32.714 } 00:09:32.714 Got JSON-RPC error response 00:09:32.714 response: 00:09:32.714 { 00:09:32.714 "code": -32602, 00:09:32.714 "message": "Invalid cntlid range [65520-65519]" 00:09:32.714 }' 00:09:32.714 01:12:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:32.714 { 00:09:32.714 "nqn": "nqn.2016-06.io.spdk:cnode18862", 00:09:32.714 "min_cntlid": 65520, 00:09:32.714 "method": "nvmf_create_subsystem", 00:09:32.714 "req_id": 1 00:09:32.714 } 00:09:32.714 Got JSON-RPC error response 00:09:32.714 response: 00:09:32.714 { 00:09:32.714 "code": -32602, 00:09:32.714 "message": "Invalid cntlid range [65520-65519]" 00:09:32.714 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:32.714 01:12:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11992 -I 0 00:09:32.973 [2024-05-15 01:12:08.522621] nvmf_rpc.c: 433:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11992: invalid cntlid range [1-0] 00:09:32.973 01:12:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:32.973 { 00:09:32.973 "nqn": "nqn.2016-06.io.spdk:cnode11992", 
00:09:32.973 "max_cntlid": 0, 00:09:32.973 "method": "nvmf_create_subsystem", 00:09:32.973 "req_id": 1 00:09:32.973 } 00:09:32.973 Got JSON-RPC error response 00:09:32.973 response: 00:09:32.973 { 00:09:32.973 "code": -32602, 00:09:32.973 "message": "Invalid cntlid range [1-0]" 00:09:32.973 }' 00:09:32.973 01:12:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:32.973 { 00:09:32.973 "nqn": "nqn.2016-06.io.spdk:cnode11992", 00:09:32.973 "max_cntlid": 0, 00:09:32.973 "method": "nvmf_create_subsystem", 00:09:32.973 "req_id": 1 00:09:32.973 } 00:09:32.973 Got JSON-RPC error response 00:09:32.973 response: 00:09:32.973 { 00:09:32.973 "code": -32602, 00:09:32.973 "message": "Invalid cntlid range [1-0]" 00:09:32.973 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:32.973 01:12:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10383 -I 65520 00:09:33.231 [2024-05-15 01:12:08.691211] nvmf_rpc.c: 433:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10383: invalid cntlid range [1-65520] 00:09:33.231 01:12:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:33.231 { 00:09:33.231 "nqn": "nqn.2016-06.io.spdk:cnode10383", 00:09:33.232 "max_cntlid": 65520, 00:09:33.232 "method": "nvmf_create_subsystem", 00:09:33.232 "req_id": 1 00:09:33.232 } 00:09:33.232 Got JSON-RPC error response 00:09:33.232 response: 00:09:33.232 { 00:09:33.232 "code": -32602, 00:09:33.232 "message": "Invalid cntlid range [1-65520]" 00:09:33.232 }' 00:09:33.232 01:12:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:33.232 { 00:09:33.232 "nqn": "nqn.2016-06.io.spdk:cnode10383", 00:09:33.232 "max_cntlid": 65520, 00:09:33.232 "method": "nvmf_create_subsystem", 00:09:33.232 "req_id": 1 00:09:33.232 } 00:09:33.232 Got JSON-RPC error response 00:09:33.232 response: 00:09:33.232 { 00:09:33.232 "code": -32602, 00:09:33.232 "message": "Invalid cntlid range [1-65520]" 00:09:33.232 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:33.232 01:12:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26228 -i 6 -I 5 00:09:33.232 [2024-05-15 01:12:08.875849] nvmf_rpc.c: 433:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26228: invalid cntlid range [6-5] 00:09:33.232 01:12:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:33.232 { 00:09:33.232 "nqn": "nqn.2016-06.io.spdk:cnode26228", 00:09:33.232 "min_cntlid": 6, 00:09:33.232 "max_cntlid": 5, 00:09:33.232 "method": "nvmf_create_subsystem", 00:09:33.232 "req_id": 1 00:09:33.232 } 00:09:33.232 Got JSON-RPC error response 00:09:33.232 response: 00:09:33.232 { 00:09:33.232 "code": -32602, 00:09:33.232 "message": "Invalid cntlid range [6-5]" 00:09:33.232 }' 00:09:33.232 01:12:08 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:33.232 { 00:09:33.232 "nqn": "nqn.2016-06.io.spdk:cnode26228", 00:09:33.232 "min_cntlid": 6, 00:09:33.232 "max_cntlid": 5, 00:09:33.232 "method": "nvmf_create_subsystem", 00:09:33.232 "req_id": 1 00:09:33.232 } 00:09:33.232 Got JSON-RPC error response 00:09:33.232 response: 00:09:33.232 { 00:09:33.232 "code": -32602, 00:09:33.232 "message": "Invalid cntlid range [6-5]" 00:09:33.232 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:33.232 01:12:08 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:33.491 { 00:09:33.491 "name": "foobar", 00:09:33.491 "method": "nvmf_delete_target", 00:09:33.491 "req_id": 1 00:09:33.491 } 00:09:33.491 Got JSON-RPC error response 00:09:33.491 response: 00:09:33.491 { 00:09:33.491 "code": -32602, 00:09:33.491 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:33.491 }' 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:33.491 { 00:09:33.491 "name": "foobar", 00:09:33.491 "method": "nvmf_delete_target", 00:09:33.491 "req_id": 1 00:09:33.491 } 00:09:33.491 Got JSON-RPC error response 00:09:33.491 response: 00:09:33.491 { 00:09:33.491 "code": -32602, 00:09:33.491 "message": "The specified target doesn't exist, cannot delete it." 00:09:33.491 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:33.491 rmmod nvme_tcp 00:09:33.491 rmmod nvme_fabrics 00:09:33.491 rmmod nvme_keyring 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3981857 ']' 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3981857 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 3981857 ']' 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 3981857 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3981857 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3981857' 00:09:33.491 killing process with pid 3981857 00:09:33.491 01:12:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 3981857 00:09:33.491 [2024-05-15 01:12:09.137726] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:33.491 
01:12:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 3981857 00:09:33.750 01:12:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:33.750 01:12:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:33.750 01:12:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:33.750 01:12:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:33.750 01:12:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:33.750 01:12:09 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.750 01:12:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.750 01:12:09 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.285 01:12:11 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:36.285 00:09:36.285 real 0m12.882s 00:09:36.285 user 0m19.964s 00:09:36.285 sys 0m6.025s 00:09:36.285 01:12:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.285 01:12:11 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:36.285 ************************************ 00:09:36.285 END TEST nvmf_invalid 00:09:36.285 ************************************ 00:09:36.285 01:12:11 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:36.285 01:12:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:36.285 01:12:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.285 01:12:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:36.285 ************************************ 00:09:36.285 START TEST nvmf_abort 00:09:36.285 ************************************ 00:09:36.285 01:12:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:36.285 * Looking for test storage... 
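For reference, the negative-path coverage that target/invalid.sh exercised above reduces to a handful of JSON-RPC calls made against an already-running nvmf target. A minimal bash sketch that replays the same checks, assuming an SPDK checkout with the target app started and paths shortened to the tree root (the full Jenkins workspace paths appear in the trace), could look like:

    # Sketch only: replays the invalid-parameter checks traced above.
    RPC=./scripts/rpc.py    # assumed relative path inside an SPDK tree

    $RPC nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode11601                         # -32603: Unable to find target foobar
    $RPC nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode22316    # -32602: Invalid SN (control character in serial)
    $RPC nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode26045         # -32602: Invalid MN (control character in model number)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22216 -i 0                              # -32602: Invalid cntlid range [0-65519]; min_cntlid below 1 is rejected
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26228 -i 6 -I 5                         # -32602: Invalid cntlid range [6-5]; min must not exceed max
    ./test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar                      # -32602: The specified target doesn't exist, cannot delete it.

In each case the script only asserts that the JSON-RPC error string matches the expected pattern (Unable to find target, Invalid SN, Invalid MN, Invalid cntlid range, and so on), which is exactly what the [[ ... == *...* ]] comparisons in the trace are doing.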
00:09:36.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.285 01:12:11 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.285 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:36.285 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.285 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.285 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.285 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.285 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.285 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.285 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:36.286 01:12:11 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:42.892 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.892 01:12:18 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:42.892 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:42.892 Found net devices under 0000:af:00.0: cvl_0_0 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:42.892 Found net devices under 0000:af:00.1: cvl_0_1 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:42.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:42.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:09:42.892 00:09:42.892 --- 10.0.0.2 ping statistics --- 00:09:42.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.892 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:42.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:09:42.892 00:09:42.892 --- 10.0.0.1 ping statistics --- 00:09:42.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.892 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3986545 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3986545 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:42.892 01:12:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 3986545 ']' 00:09:42.893 01:12:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.893 01:12:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:42.893 01:12:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.893 01:12:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:42.893 01:12:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:42.893 [2024-05-15 01:12:18.571577] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:09:42.893 [2024-05-15 01:12:18.571629] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.153 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.153 [2024-05-15 01:12:18.647487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:43.153 [2024-05-15 01:12:18.721653] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.153 [2024-05-15 01:12:18.721688] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
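The nvmf_tcp_init sequence traced above is what gives the abort test an isolated target/initiator pair on a single host: the first ice port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables ACCEPT rule for TCP port 4420 is added on the initiator interface, and both directions are verified with ping before nvmf_tgt is started inside the namespace. A minimal standalone sketch of that topology setup, assuming two kernel-bound interfaces named cvl_0_0 and cvl_0_1 as in this run (this is not the SPDK helper itself):

  #!/usr/bin/env bash
  set -e
  TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk     # names taken from this run; adjust for other NICs
  ip -4 addr flush "$TGT_IF"
  ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"                    # target port lives only inside the namespace
  ip addr add 10.0.0.1/24 dev "$INI_IF"                # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1               # target -> initiator

Because the target application is later launched with ip netns exec "$NS", every TCP listener it opens on 10.0.0.2 is reachable only over the physical link between the two ports rather than over the host's loopback, which keeps the test traffic off the build host's management network.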
00:09:43.153 [2024-05-15 01:12:18.721698] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.153 [2024-05-15 01:12:18.721706] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.153 [2024-05-15 01:12:18.721730] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.153 [2024-05-15 01:12:18.721831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:43.153 [2024-05-15 01:12:18.721919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:43.153 [2024-05-15 01:12:18.721921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.722 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:43.722 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:09:43.722 01:12:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:43.722 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:43.722 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.983 [2024-05-15 01:12:19.425449] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.983 Malloc0 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.983 Delay0 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.983 01:12:19 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.983 [2024-05-15 01:12:19.506374] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:43.983 [2024-05-15 01:12:19.506622] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:43.983 01:12:19 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:43.983 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.983 [2024-05-15 01:12:19.615665] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:46.523 Initializing NVMe Controllers 00:09:46.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:46.523 controller IO queue size 128 less than required 00:09:46.523 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:46.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:46.523 Initialization complete. Launching workers. 
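At this point abort.sh has built the whole target over JSON-RPC and is driving it from the initiator side: a TCP transport, a 64 MiB malloc bdev with 4096-byte blocks, a delay bdev (Delay0) stacked on top so reads and writes stay outstanding for about a second and can actually be aborted, subsystem nqn.2016-06.io.spdk:cnode0 with Delay0 as namespace 1, and TCP listeners for the subsystem and for discovery on 10.0.0.2:4420. Condensed into a sketch, with the long Jenkins paths shortened and an nvmf_tgt already answering on the default RPC socket assumed:

  RPC=scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
  $RPC bdev_malloc_create 64 4096 -b Malloc0                       # 64 MiB RAM bdev, 4096-byte blocks
  $RPC bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000                 # ~1 s avg/p99 read+write latency (microseconds)
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # From the root namespace, hammer it with 128 outstanding reads for 1 second while issuing aborts:
  build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

The NS/CTRLR counters that follow are printed by the abort example itself; the surrounding script treats a clean exit of the example and of the cleanup RPCs as success.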
00:09:46.523 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41238 00:09:46.523 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41299, failed to submit 62 00:09:46.523 success 41242, unsuccess 57, failed 0 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:46.523 rmmod nvme_tcp 00:09:46.523 rmmod nvme_fabrics 00:09:46.523 rmmod nvme_keyring 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3986545 ']' 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3986545 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 3986545 ']' 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 3986545 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3986545 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3986545' 00:09:46.523 killing process with pid 3986545 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 3986545 00:09:46.523 [2024-05-15 01:12:21.848644] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:46.523 01:12:21 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 3986545 00:09:46.523 01:12:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:46.523 01:12:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:46.523 01:12:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:46.523 01:12:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:46.523 
01:12:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:46.523 01:12:22 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.523 01:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:46.523 01:12:22 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.062 01:12:24 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:49.062 00:09:49.062 real 0m12.643s 00:09:49.062 user 0m13.332s 00:09:49.062 sys 0m6.480s 00:09:49.062 01:12:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:49.062 01:12:24 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:49.062 ************************************ 00:09:49.062 END TEST nvmf_abort 00:09:49.062 ************************************ 00:09:49.062 01:12:24 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:49.062 01:12:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:49.062 01:12:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:49.062 01:12:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:49.062 ************************************ 00:09:49.062 START TEST nvmf_ns_hotplug_stress 00:09:49.062 ************************************ 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:49.062 * Looking for test storage... 00:09:49.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.062 
01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:49.062 
01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:49.062 01:12:24 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:55.637 01:12:30 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:55.637 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:55.637 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.637 
01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:55.637 Found net devices under 0000:af:00.0: cvl_0_0 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:55.637 Found net devices under 0000:af:00.1: cvl_0_1 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:55.637 
01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:55.637 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:55.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:55.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:09:55.638 00:09:55.638 --- 10.0.0.2 ping statistics --- 00:09:55.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.638 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:55.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:55.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:09:55.638 00:09:55.638 --- 10.0.0.1 ping statistics --- 00:09:55.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.638 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3990808 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3990808 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 3990808 ']' 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:55.638 01:12:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:55.638 [2024-05-15 01:12:30.939820] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:09:55.638 [2024-05-15 01:12:30.939865] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.638 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.638 [2024-05-15 01:12:31.013471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:55.638 [2024-05-15 01:12:31.086978] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
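The ns_hotplug_stress test reuses the same namespace plumbing and then starts its own target: nvmfappstart runs nvmf_tgt under ip netns exec so every listener it opens lives on 10.0.0.2, records the pid (3990808 here), and waits until the RPC socket answers before any rpc.py call is made. A rough equivalent of that start-and-wait step, with paths shortened (the harness's waitforlisten helper adds retry limits and better error reporting):

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll the default RPC socket until the target is ready to accept commands.
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
      sleep 0.5
  done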
00:09:55.638 [2024-05-15 01:12:31.087016] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.638 [2024-05-15 01:12:31.087026] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.638 [2024-05-15 01:12:31.087034] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.638 [2024-05-15 01:12:31.087056] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:55.638 [2024-05-15 01:12:31.087160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.638 [2024-05-15 01:12:31.087252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.638 [2024-05-15 01:12:31.087256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.207 01:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:56.207 01:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:09:56.207 01:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:56.207 01:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:56.207 01:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:56.207 01:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.207 01:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:56.207 01:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:56.467 [2024-05-15 01:12:31.944037] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.467 01:12:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:56.467 01:12:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:56.727 [2024-05-15 01:12:32.309627] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:56.727 [2024-05-15 01:12:32.309859] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:56.727 01:12:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:56.987 01:12:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:57.246 Malloc0 00:09:57.246 01:12:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:57.246 Delay0 00:09:57.246 01:12:32 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.506 01:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:57.765 NULL1 00:09:57.765 01:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:57.765 01:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:57.765 01:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3991254 00:09:57.765 01:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:09:57.765 01:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.765 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.023 01:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.281 01:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:58.281 01:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:58.281 true 00:09:58.281 01:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:09:58.281 01:12:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.541 01:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:58.800 01:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:58.800 01:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:58.800 true 00:09:59.059 01:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:09:59.059 01:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.059 01:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.318 01:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:59.318 01:12:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1003 00:09:59.577 true 00:09:59.577 01:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:09:59.577 01:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:59.577 01:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:59.836 01:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:59.836 01:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:00.096 true 00:10:00.096 01:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:00.096 01:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.354 01:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.354 01:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:00.354 01:12:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:00.613 true 00:10:00.613 01:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:00.613 01:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:00.871 01:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.872 01:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:00.872 01:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:01.130 true 00:10:01.130 01:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:01.130 01:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.389 01:12:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.389 01:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:01.389 01:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:01.672 true 00:10:01.672 01:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 
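What the trace is cycling through from here on is the actual stress loop: spdk_nvme_perf runs in the background against cnode1 for 30 seconds (512-byte random reads at queue depth 128, with -Q 1000 so the run tolerates the I/O errors that hot-removing a namespace provokes), while the shell keeps removing namespace 1, re-adding Delay0, and bumping the NULL1 bdev size from 1000 toward 1001, 1002, and so on, for as long as the perf process (pid 3991254 above) is still alive. A sketch of that loop, with paths shortened and the target setup from the preceding RPCs assumed:

  RPC=scripts/rpc.py
  build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID" 2> /dev/null; do                        # keep going until the 30 s perf run exits
      $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove ns 1 under load
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # plug it straight back in
      null_size=$((null_size + 1))
      $RPC bdev_null_resize NULL1 "$null_size"                      # resize the NULL1 namespace's bdev one step larger
  done
  wait "$PERF_PID"

Each 'true' in the trace is bdev_null_resize acknowledging the new size.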
00:10:01.672 01:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.936 01:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.196 01:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:02.196 01:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:02.196 true 00:10:02.196 01:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:02.196 01:12:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.455 01:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:02.714 01:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:02.714 01:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:02.714 true 00:10:02.973 01:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:02.973 01:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.973 01:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.232 01:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:03.232 01:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:03.491 true 00:10:03.491 01:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:03.491 01:12:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.491 01:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.750 01:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:03.750 01:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:04.009 true 00:10:04.009 01:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:04.009 01:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.267 01:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.267 01:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:04.268 01:12:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:04.527 true 00:10:04.527 01:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:04.527 01:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.785 01:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.045 01:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:05.045 01:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:05.045 true 00:10:05.045 01:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:05.045 01:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.303 01:12:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.561 01:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:05.561 01:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:05.819 true 00:10:05.819 01:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:05.819 01:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.819 01:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.311 01:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:06.311 01:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:06.311 true 00:10:06.311 01:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:06.311 01:12:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.570 01:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.829 01:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:06.829 01:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:06.829 true 00:10:06.829 01:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:06.829 01:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.088 01:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.347 01:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:07.347 01:12:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:07.347 true 00:10:07.347 01:12:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:07.347 01:12:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.606 01:12:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.865 01:12:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:07.865 01:12:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:08.124 true 00:10:08.124 01:12:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:08.124 01:12:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.124 01:12:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.383 01:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:08.383 01:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:08.642 true 00:10:08.642 01:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:08.642 01:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.902 01:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:08.902 01:12:44 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:08.902 01:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:09.161 true 00:10:09.161 01:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:09.161 01:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.419 01:12:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.678 01:12:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:09.678 01:12:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:09.678 true 00:10:09.678 01:12:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:09.678 01:12:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.936 01:12:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.195 01:12:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:10.195 01:12:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:10.455 true 00:10:10.455 01:12:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:10.455 01:12:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.455 01:12:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.714 01:12:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:10.714 01:12:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:10.973 true 00:10:10.973 01:12:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:10.973 01:12:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.232 01:12:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.232 01:12:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:11.232 01:12:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:11.492 true 00:10:11.492 01:12:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:11.492 01:12:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.751 01:12:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.011 01:12:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:12.011 01:12:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:12.011 true 00:10:12.011 01:12:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:12.011 01:12:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.270 01:12:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.530 01:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:12.530 01:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:12.530 true 00:10:12.789 01:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:12.789 01:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.789 01:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.048 01:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:13.048 01:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:13.307 true 00:10:13.307 01:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:13.307 01:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.566 01:12:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.566 01:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:13.566 01:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:13.877 true 00:10:13.877 
01:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:13.877 01:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.136 01:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.137 01:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:14.137 01:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:14.395 true 00:10:14.395 01:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:14.395 01:12:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.653 01:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.653 01:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:14.653 01:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:14.912 true 00:10:14.912 01:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:14.912 01:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.171 01:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.431 01:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:15.431 01:12:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:15.431 true 00:10:15.431 01:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:15.431 01:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:15.690 01:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.949 01:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:15.949 01:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:15.949 true 00:10:16.208 01:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:16.208 01:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.208 01:12:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.467 01:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:16.467 01:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:16.726 true 00:10:16.726 01:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:16.726 01:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.985 01:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.985 01:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:16.985 01:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:17.244 true 00:10:17.244 01:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:17.244 01:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.503 01:12:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.503 01:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:10:17.503 01:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:10:17.761 true 00:10:17.761 01:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:17.761 01:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.019 01:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.278 01:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:10:18.278 01:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:10:18.278 true 00:10:18.278 01:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:18.278 01:12:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
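
The cycle traced above (ns_hotplug_stress.sh@44-@50 in the trace) boils down to a short shell loop: while the background I/O generator is still running, hot-remove namespace 1, re-add the Delay0 bdev as a namespace, then grow the NULL1 null bdev by one unit. A minimal sketch reconstructed from the traced commands; the rpc.py path, NQN, PID and bdev names are verbatim from the trace, while the perf_pid variable, the starting size and the loop form are assumptions for illustration only:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  perf_pid=3991254                                # the traced PID; normally captured when the I/O generator is launched
  null_size=1000                                  # assumed starting value; the trace above is already past 1030
  while kill -0 "$perf_pid" 2>/dev/null; do       # keep cycling while the I/O generator is alive
      "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove namespace 1
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # re-attach the Delay0 bdev as a namespace
      null_size=$((null_size + 1))
      "$rpc" bdev_null_resize NULL1 "$null_size"  # resize the NULL1 bdev backing the second namespace
  done
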
00:10:18.537 01:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.796 01:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:10:18.796 01:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:10:19.055 true 00:10:19.055 01:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:19.055 01:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.055 01:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.313 01:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:10:19.313 01:12:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:10:19.572 true 00:10:19.572 01:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:19.572 01:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.830 01:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.830 01:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:10:19.830 01:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:10:20.089 true 00:10:20.089 01:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:20.089 01:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.348 01:12:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.607 01:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:10:20.607 01:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:10:20.607 true 00:10:20.607 01:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:20.607 01:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.866 01:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.124 01:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:10:21.124 01:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:10:21.124 true 00:10:21.124 01:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:21.124 01:12:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.383 01:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.642 01:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:10:21.642 01:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:10:21.901 true 00:10:21.901 01:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:21.901 01:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.901 01:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.160 01:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:10:22.160 01:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:10:22.419 true 00:10:22.419 01:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:22.419 01:12:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.678 01:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.678 01:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:10:22.678 01:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:10:22.937 true 00:10:22.937 01:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:22.937 01:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.195 01:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.454 01:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 
00:10:23.454 01:12:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:10:23.454 true 00:10:23.454 01:12:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:23.454 01:12:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.712 01:12:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.972 01:12:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:10:23.972 01:12:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:10:24.231 true 00:10:24.231 01:12:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:24.231 01:12:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.231 01:12:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.490 01:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:10:24.490 01:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:10:24.749 true 00:10:24.749 01:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:24.749 01:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.008 01:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.008 01:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:10:25.008 01:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:10:25.267 true 00:10:25.267 01:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:25.267 01:13:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.526 01:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.784 01:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:10:25.784 01:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1049 00:10:25.784 true 00:10:25.784 01:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:25.784 01:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.043 01:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.313 01:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:10:26.313 01:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:10:26.313 true 00:10:26.611 01:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:26.611 01:13:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:26.611 01:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:26.872 01:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:10:26.872 01:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:10:27.132 true 00:10:27.132 01:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:27.132 01:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.132 01:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.391 01:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:10:27.391 01:13:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:10:27.651 true 00:10:27.651 01:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254 00:10:27.651 01:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.910 01:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.910 01:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:10:27.910 01:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:10:28.169 Initializing NVMe Controllers 00:10:28.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1
00:10:28.169 Controller IO queue size 128, less than required.
00:10:28.169 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:28.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:28.169 Initialization complete. Launching workers.
00:10:28.169 ========================================================
00:10:28.169 Latency(us)
00:10:28.169 Device Information : IOPS MiB/s Average min max
00:10:28.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27268.17 13.31 4694.10 2027.90 9693.69
00:10:28.169 ========================================================
00:10:28.169 Total : 27268.17 13.31 4694.10 2027.90 9693.69
00:10:28.169
00:10:28.169 true
00:10:28.169 01:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3991254
00:10:28.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3991254) - No such process
00:10:28.169 01:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3991254
00:10:28.169 01:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:28.428 01:13:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:28.686 01:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:28.686 01:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:28.686 01:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:28.686 01:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:28.686 01:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:28.686 null0
00:10:28.686 01:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:28.945 01:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:28.945 01:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:10:28.945 null1
00:10:28.945 01:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:29.204 01:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:29.204 01:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:10:29.204 null2
00:10:29.204 01:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:29.204 01:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:29.204 01:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:10:29.204 null3
00:10:29.204 01:13:04 nvmf_tcp.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:29.204 01:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:29.204 01:13:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:29.463 null4 00:10:29.463 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:29.463 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:29.463 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:29.722 null5 00:10:29.722 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:29.722 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:29.722 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:29.722 null6 00:10:29.722 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:29.722 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:29.722 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:29.981 null7 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
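
Before the parallel phase starts, the trace above provisions one null bdev per worker (null0 through null7) with plain bdev_null_create calls; the 100 and 4096 arguments (bdev size and block size) are copied verbatim from the trace. A minimal sketch under those assumptions:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nthreads=8
  for ((i = 0; i < nthreads; i++)); do
      "$rpc" bdev_null_create "null$i" 100 4096   # one backing bdev per add/remove worker
  done
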
00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:29.981 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:29.982 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:29.982 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.982 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:29.982 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:29.982 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:29.982 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:29.982 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:29.982 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:29.982 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
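
The per-worker body and the fork/join around it can be reconstructed from the @14-@18 and @62-@66 lines in the trace: each worker adds and removes its own namespace ID ten times while the seven sibling workers do the same against the same subsystem. A sketch with names taken from the trace, not claimed to be the verbatim script:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nthreads=8
  add_remove() {                                  # per-namespace worker, as traced at ns_hotplug_stress.sh@14-@18
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }
  pids=()
  for ((i = 0; i < nthreads; i++)); do
      add_remove "$((i + 1))" "null$i" &          # eight concurrent hotplug workers against cnode1
      pids+=($!)
  done
  wait "${pids[@]}"                               # join, as at ns_hotplug_stress.sh@66

The point of this phase is concurrent add/remove of distinct namespace IDs against one subsystem, in contrast to the single-threaded resize loop earlier in the trace.
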
00:10:29.982 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.982 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:29.982 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:29.982 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:29.982 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3996986 3996987 3996989 3996992 3996993 3996995 3996996 3996998 00:10:29.982 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:29.982 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:29.982 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:29.982 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:29.982 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:30.241 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.241 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:30.241 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:30.241 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:30.241 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:30.241 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:30.241 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:30.241 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.500 01:13:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:30.500 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.500 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:30.500 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:10:30.500 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:30.500 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:30.500 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:30.500 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:30.500 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:30.759 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:31.028 01:13:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.028 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:31.295 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:31.295 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:31.295 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:31.295 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:31.295 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:31.296 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:31.296 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:31.296 01:13:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.554 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:31.813 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:31.813 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:31.814 01:13:07 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:31.814 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:32.073 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:32.073 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:32.073 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:32.073 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.073 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:32.073 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:32.073 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:32.073 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.332 01:13:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:32.332 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.332 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:32.332 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:32.332 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:32.332 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:32.592 01:13:08 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:32.592 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:32.852 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:32.852 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.852 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:32.852 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:32.852 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:32.852 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:32.852 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:32.852 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:33.112 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:33.371 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.371 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.371 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:33.371 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.371 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.371 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:33.371 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.371 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.371 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.371 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.371 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:33.371 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:33.371 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.371 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.371 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:10:33.371 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.372 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.372 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:33.372 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.372 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.372 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.372 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.372 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:33.372 01:13:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:33.630 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:33.630 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.630 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:33.630 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:33.630 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:33.630 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:33.630 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:33.630 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:33.630 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.630 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.630 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.630 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.630 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.630 01:13:09 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.630 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.630 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.888 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.888 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.888 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:33.889 rmmod nvme_tcp 00:10:33.889 rmmod nvme_fabrics 00:10:33.889 rmmod nvme_keyring 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3990808 ']' 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3990808 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 3990808 ']' 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 3990808 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 3990808 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 3990808' 00:10:33.889 killing process with pid 3990808 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@965 -- # kill 3990808 00:10:33.889 [2024-05-15 01:13:09.483142] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:33.889 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 3990808 00:10:34.148 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:34.148 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:34.148 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:34.148 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:34.148 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:34.148 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.148 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:34.148 01:13:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.679 01:13:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:36.679 00:10:36.679 real 0m47.539s 00:10:36.679 user 3m11.055s 00:10:36.679 sys 0m22.973s 00:10:36.679 01:13:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:36.679 01:13:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.679 ************************************ 00:10:36.679 END TEST nvmf_ns_hotplug_stress 00:10:36.679 ************************************ 00:10:36.679 01:13:11 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:36.679 01:13:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:36.679 01:13:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:36.679 01:13:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:36.679 ************************************ 00:10:36.679 START TEST nvmf_connect_stress 00:10:36.679 ************************************ 00:10:36.679 01:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:36.679 * Looking for test storage... 
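The interleaved ns_hotplug_stress.sh@16/@17/@18 xtrace lines above are the namespace hot-plug churn itself: the null0..null7 bdevs are repeatedly attached as namespaces 1..8 of nqn.2016-06.io.spdk:cnode1 and detached again, ten rounds at a time, until the trap is cleared and nvmftestfini tears the target down. A minimal bash sketch of that pattern, reconstructed from the trace (the helper name, the per-namespace background workers, and the null-bdev mapping are assumptions inferred from the log, not the literal test script):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subsys=nqn.2016-06.io.spdk:cnode1

  add_remove() {                               # hypothetical helper
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; ++i)); do           # ns_hotplug_stress.sh@16
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"   # @17
          "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$nsid"           # @18
      done
  }

  # One worker per namespace; running them concurrently would explain the
  # out-of-order add/remove bursts seen in the log above.
  for n in {1..8}; do
      add_remove "$n" "null$((n - 1))" &
  done
  wait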
00:10:36.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:36.679 01:13:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.679 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:36.679 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:36.680 01:13:11 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.680 01:13:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:36.680 01:13:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:36.680 01:13:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:36.680 01:13:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:43.246 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:43.246 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:43.246 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:43.246 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:43.246 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:43.246 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:43.246 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:43.246 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:43.246 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:43.246 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:43.246 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:43.246 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:43.246 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:43.246 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:43.246 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:43.246 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:43.246 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:43.247 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:43.247 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:43.247 Found net devices under 0000:af:00.0: cvl_0_0 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:43.247 01:13:18 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:43.247 Found net devices under 0000:af:00.1: cvl_0_1 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:43.247 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:43.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:43.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:10:43.507 00:10:43.507 --- 10.0.0.2 ping statistics --- 00:10:43.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.507 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:43.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:43.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:10:43.507 00:10:43.507 --- 10.0.0.1 ping statistics --- 00:10:43.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.507 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=4001565 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 4001565 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 4001565 ']' 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:43.507 01:13:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:43.507 [2024-05-15 01:13:19.039842] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
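The nvmf_tcp_init block traced above is the entire per-test network bring-up: one of the two detected e810 ports (cvl_0_0) is moved into a private network namespace to act as the target, the other (cvl_0_1) stays in the default namespace as the initiator, port 4420 is opened in the firewall, and a single ping in each direction confirms reachability before nvmf_tgt is started inside the namespace. A minimal manual sketch of those steps, using the interface names, addresses and namespace name exactly as they appear in this log (on another system they are placeholders):

    # assumes cvl_0_0/cvl_0_1 are the ice-driven ports found above
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Because the target port is isolated in its own namespace, the successful pings above imply the traffic really crosses the physical link between the two ports rather than being short-circuited inside the host stack.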
00:10:43.507 [2024-05-15 01:13:19.039890] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.507 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.507 [2024-05-15 01:13:19.111709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:43.507 [2024-05-15 01:13:19.184368] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:43.507 [2024-05-15 01:13:19.184407] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.507 [2024-05-15 01:13:19.184417] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.507 [2024-05-15 01:13:19.184426] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.507 [2024-05-15 01:13:19.184433] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:43.507 [2024-05-15 01:13:19.184536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.507 [2024-05-15 01:13:19.184638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.507 [2024-05-15 01:13:19.184640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.172 01:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:44.172 01:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:10:44.172 01:13:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:44.172 01:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:44.172 01:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.432 [2024-05-15 01:13:19.885143] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.432 [2024-05-15 01:13:19.901572] nvmf_rpc.c: 614:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:44.432 [2024-05-15 01:13:19.917348] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.432 NULL1 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=4001666 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.432 EAL: No free 2048 kB hugepages reported on node 1 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.432 01:13:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.432 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.432 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.432 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.432 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.432 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.432 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.432 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.432 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.432 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.432 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.432 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.432 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.432 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.432 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.432 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:44.432 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:44.432 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:44.432 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.432 01:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.432 01:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:44.692 01:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.692 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:44.692 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:44.692 01:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.692 01:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.259 01:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.259 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:45.259 01:13:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.259 01:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.259 01:13:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.517 01:13:21 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.517 01:13:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:45.517 01:13:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.517 01:13:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.517 01:13:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:45.776 01:13:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:45.776 01:13:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:45.776 01:13:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:45.776 01:13:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:45.776 01:13:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.035 01:13:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.035 01:13:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:46.035 01:13:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.035 01:13:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.035 01:13:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.293 01:13:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.293 01:13:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:46.293 01:13:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.293 01:13:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.293 01:13:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.861 01:13:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.861 01:13:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:46.861 01:13:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:46.861 01:13:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.861 01:13:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.120 01:13:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.120 01:13:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:47.120 01:13:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.120 01:13:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.120 01:13:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.380 01:13:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.380 01:13:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:47.380 01:13:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.380 01:13:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.380 01:13:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.639 01:13:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:10:47.639 01:13:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:47.639 01:13:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.639 01:13:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.639 01:13:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.206 01:13:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.206 01:13:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:48.206 01:13:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.206 01:13:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.206 01:13:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.465 01:13:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.465 01:13:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:48.465 01:13:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.465 01:13:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.465 01:13:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.724 01:13:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.724 01:13:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:48.724 01:13:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.724 01:13:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.724 01:13:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.982 01:13:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.982 01:13:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:48.982 01:13:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.982 01:13:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.982 01:13:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.241 01:13:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.241 01:13:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:49.241 01:13:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.241 01:13:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.241 01:13:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.808 01:13:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.808 01:13:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:49.808 01:13:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.808 01:13:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.808 01:13:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.067 01:13:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.067 01:13:25 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:50.067 01:13:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.067 01:13:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.067 01:13:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.326 01:13:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.326 01:13:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:50.326 01:13:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.326 01:13:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.326 01:13:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.584 01:13:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.584 01:13:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:50.584 01:13:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.584 01:13:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.584 01:13:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.842 01:13:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.842 01:13:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:50.842 01:13:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.842 01:13:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.842 01:13:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.410 01:13:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.410 01:13:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:51.410 01:13:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.410 01:13:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.410 01:13:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.667 01:13:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.667 01:13:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:51.667 01:13:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.667 01:13:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.667 01:13:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.925 01:13:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.926 01:13:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:51.926 01:13:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.926 01:13:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.926 01:13:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.184 01:13:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.184 01:13:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 4001666 00:10:52.184 01:13:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.184 01:13:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.184 01:13:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.752 01:13:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.752 01:13:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:52.752 01:13:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.752 01:13:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.752 01:13:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.011 01:13:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.011 01:13:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:53.011 01:13:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.011 01:13:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.011 01:13:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.270 01:13:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.270 01:13:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:53.270 01:13:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.270 01:13:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.270 01:13:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.529 01:13:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.529 01:13:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:53.529 01:13:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.529 01:13:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.529 01:13:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.787 01:13:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.787 01:13:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:53.787 01:13:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.787 01:13:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.787 01:13:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.354 01:13:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.354 01:13:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:54.354 01:13:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.354 01:13:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.354 01:13:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.613 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.613 01:13:30 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 4001666 00:10:54.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (4001666) - No such process 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 4001666 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:54.613 rmmod nvme_tcp 00:10:54.613 rmmod nvme_fabrics 00:10:54.613 rmmod nvme_keyring 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 4001565 ']' 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 4001565 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 4001565 ']' 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 4001565 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4001565 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4001565' 00:10:54.613 killing process with pid 4001565 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 4001565 00:10:54.613 [2024-05-15 01:13:30.239859] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:54.613 01:13:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 4001565 00:10:54.871 01:13:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:54.871 01:13:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:54.871 01:13:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:54.871 01:13:30 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:54.871 01:13:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:54.871 01:13:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.871 01:13:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:54.871 01:13:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.408 01:13:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:57.408 00:10:57.408 real 0m20.659s 00:10:57.408 user 0m40.528s 00:10:57.408 sys 0m10.355s 00:10:57.408 01:13:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:57.408 01:13:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.408 ************************************ 00:10:57.408 END TEST nvmf_connect_stress 00:10:57.408 ************************************ 00:10:57.408 01:13:32 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:57.408 01:13:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:57.408 01:13:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:57.408 01:13:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:57.408 ************************************ 00:10:57.408 START TEST nvmf_fused_ordering 00:10:57.408 ************************************ 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:10:57.408 * Looking for test storage... 
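Before the log moves fully into nvmf_fused_ordering, the shape of the nvmf_connect_stress run that just ended (real 0m20.659s above) is worth spelling out: connect_stress.sh starts the SPDK connect_stress binary against the subsystem for a fixed 10 seconds (-t 10), builds a batch of RPC requests in rpc.txt (the loop at lines 27-28; the request bodies are not visible in the trace), and then lines 34-35 keep replaying that batch through rpc_cmd for as long as the binary's PID answers kill -0. When the binary exits, kill -0 reports 'No such process', the script waits on the PID and deletes rpc.txt. A rough sketch of that loop, assuming the harness's rpc_cmd helper and path variables:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!
    # ... rpc.txt is populated with the queued requests here; they are elided in this trace ...
    while kill -0 "$PERF_PID"; do      # line 34 in the trace
        rpc_cmd < "$rpcs"              # line 35: replay the RPC batch while connects are in flight
    done
    wait "$PERF_PID"
    rm -f "$rpcs"

The loop's purpose is simply to keep the target's RPC server busy so that management-plane activity overlaps with the initiator's connect stress.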
00:10:57.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.408 01:13:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:57.409 01:13:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:04.030 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:04.030 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:04.030 Found net devices under 0000:af:00.0: cvl_0_0 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:04.030 01:13:39 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:04.030 Found net devices under 0000:af:00.1: cvl_0_1 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.030 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:04.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:04.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:11:04.031 00:11:04.031 --- 10.0.0.2 ping statistics --- 00:11:04.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.031 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:04.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:11:04.031 00:11:04.031 --- 10.0.0.1 ping statistics --- 00:11:04.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.031 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=4007232 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 4007232 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 4007232 ']' 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:04.031 01:13:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:04.031 [2024-05-15 01:13:39.501376] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
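The target application for this test is launched the same way as before, prefixed with the namespace command so that it binds to the port living inside cvl_0_0_ns_spdk; the only notable difference from connect_stress is the reactor mask (-m 0x2, a single core, versus -m 0xE and the three reactors reported earlier). A minimal sketch of the launch step, using the paths and helpers visible in this trace:

    # nvmfappstart, as traced: run nvmf_tgt inside the target namespace
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # harness helper: blocks until the app serves RPCs on /var/tmp/spdk.sock

Everything that follows talks to this process over its RPC socket, which is why the harness refuses to continue until waitforlisten succeeds.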
00:11:04.031 [2024-05-15 01:13:39.501424] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.031 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.031 [2024-05-15 01:13:39.576039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.031 [2024-05-15 01:13:39.647614] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.031 [2024-05-15 01:13:39.647649] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.031 [2024-05-15 01:13:39.647658] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.031 [2024-05-15 01:13:39.647667] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.031 [2024-05-15 01:13:39.647675] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.031 [2024-05-15 01:13:39.647694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.966 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:04.966 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:11:04.966 01:13:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:04.966 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:04.966 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:04.966 01:13:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:04.967 [2024-05-15 01:13:40.349717] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:04.967 [2024-05-15 01:13:40.365701] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:04.967 [2024-05-15 01:13:40.365899] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:04.967 NULL1 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:04.967 01:13:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:04.967 [2024-05-15 01:13:40.421574] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
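The rpc_cmd calls traced above configure the target before the I/O tool runs. Assuming rpc_cmd is the test suite's wrapper around scripts/rpc.py on the default /var/tmp/spdk.sock socket, the same configuration can be issued directly as sketched below; every command and argument is taken verbatim from the trace.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192                        # TCP transport, 8192-byte IO unit
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
       -a -s SPDK00000000000001 -m 10                                 # allow any host, serial number, up to 10 namespaces
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.2 -s 4420                                     # listen on the namespaced target address
  $RPC bdev_null_create NULL1 1000 512                                # 1000 MB null bdev, 512-byte blocks
  $RPC bdev_wait_for_examine
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1         # attached above as namespace ID 1, size 1GB

  # Drive the subsystem with the fused-ordering exerciser (fused_ordering.sh@22).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'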
00:11:04.967 [2024-05-15 01:13:40.421617] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4007281 ] 00:11:04.967 EAL: No free 2048 kB hugepages reported on node 1 00:11:05.535 Attached to nqn.2016-06.io.spdk:cnode1 00:11:05.535 Namespace ID: 1 size: 1GB 00:11:05.535 fused_ordering(0) 00:11:05.535 fused_ordering(1) 00:11:05.535 fused_ordering(2) 00:11:05.535 fused_ordering(3) 00:11:05.535 fused_ordering(4) 00:11:05.535 fused_ordering(5) 00:11:05.535 fused_ordering(6) 00:11:05.535 fused_ordering(7) 00:11:05.535 fused_ordering(8) 00:11:05.535 fused_ordering(9) 00:11:05.535 fused_ordering(10) 00:11:05.535 fused_ordering(11) 00:11:05.535 fused_ordering(12) 00:11:05.535 fused_ordering(13) 00:11:05.535 fused_ordering(14) 00:11:05.535 fused_ordering(15) 00:11:05.535 fused_ordering(16) 00:11:05.535 fused_ordering(17) 00:11:05.535 fused_ordering(18) 00:11:05.535 fused_ordering(19) 00:11:05.535 fused_ordering(20) 00:11:05.535 fused_ordering(21) 00:11:05.535 fused_ordering(22) 00:11:05.535 fused_ordering(23) 00:11:05.535 fused_ordering(24) 00:11:05.535 fused_ordering(25) 00:11:05.535 fused_ordering(26) 00:11:05.535 fused_ordering(27) 00:11:05.535 fused_ordering(28) 00:11:05.535 fused_ordering(29) 00:11:05.535 fused_ordering(30) 00:11:05.535 fused_ordering(31) 00:11:05.535 fused_ordering(32) 00:11:05.535 fused_ordering(33) 00:11:05.535 fused_ordering(34) 00:11:05.535 fused_ordering(35) 00:11:05.535 fused_ordering(36) 00:11:05.535 fused_ordering(37) 00:11:05.535 fused_ordering(38) 00:11:05.535 fused_ordering(39) 00:11:05.535 fused_ordering(40) 00:11:05.535 fused_ordering(41) 00:11:05.535 fused_ordering(42) 00:11:05.535 fused_ordering(43) 00:11:05.535 fused_ordering(44) 00:11:05.535 fused_ordering(45) 00:11:05.535 fused_ordering(46) 00:11:05.535 fused_ordering(47) 00:11:05.535 fused_ordering(48) 00:11:05.535 fused_ordering(49) 00:11:05.535 fused_ordering(50) 00:11:05.535 fused_ordering(51) 00:11:05.535 fused_ordering(52) 00:11:05.535 fused_ordering(53) 00:11:05.535 fused_ordering(54) 00:11:05.535 fused_ordering(55) 00:11:05.535 fused_ordering(56) 00:11:05.535 fused_ordering(57) 00:11:05.535 fused_ordering(58) 00:11:05.535 fused_ordering(59) 00:11:05.535 fused_ordering(60) 00:11:05.535 fused_ordering(61) 00:11:05.535 fused_ordering(62) 00:11:05.535 fused_ordering(63) 00:11:05.535 fused_ordering(64) 00:11:05.535 fused_ordering(65) 00:11:05.535 fused_ordering(66) 00:11:05.535 fused_ordering(67) 00:11:05.535 fused_ordering(68) 00:11:05.535 fused_ordering(69) 00:11:05.535 fused_ordering(70) 00:11:05.535 fused_ordering(71) 00:11:05.535 fused_ordering(72) 00:11:05.535 fused_ordering(73) 00:11:05.535 fused_ordering(74) 00:11:05.535 fused_ordering(75) 00:11:05.535 fused_ordering(76) 00:11:05.536 fused_ordering(77) 00:11:05.536 fused_ordering(78) 00:11:05.536 fused_ordering(79) 00:11:05.536 fused_ordering(80) 00:11:05.536 fused_ordering(81) 00:11:05.536 fused_ordering(82) 00:11:05.536 fused_ordering(83) 00:11:05.536 fused_ordering(84) 00:11:05.536 fused_ordering(85) 00:11:05.536 fused_ordering(86) 00:11:05.536 fused_ordering(87) 00:11:05.536 fused_ordering(88) 00:11:05.536 fused_ordering(89) 00:11:05.536 fused_ordering(90) 00:11:05.536 fused_ordering(91) 00:11:05.536 fused_ordering(92) 00:11:05.536 fused_ordering(93) 00:11:05.536 fused_ordering(94) 00:11:05.536 fused_ordering(95) 00:11:05.536 fused_ordering(96) 00:11:05.536 
fused_ordering(97) through fused_ordering(956) logged in unbroken sequence, timestamps advancing from 00:11:05.536 to 00:11:08.546
fused_ordering(957) 00:11:08.546 fused_ordering(958) 00:11:08.546 fused_ordering(959) 00:11:08.546 fused_ordering(960) 00:11:08.546 fused_ordering(961) 00:11:08.546 fused_ordering(962) 00:11:08.546 fused_ordering(963) 00:11:08.546 fused_ordering(964) 00:11:08.546 fused_ordering(965) 00:11:08.546 fused_ordering(966) 00:11:08.546 fused_ordering(967) 00:11:08.546 fused_ordering(968) 00:11:08.546 fused_ordering(969) 00:11:08.546 fused_ordering(970) 00:11:08.546 fused_ordering(971) 00:11:08.546 fused_ordering(972) 00:11:08.546 fused_ordering(973) 00:11:08.546 fused_ordering(974) 00:11:08.546 fused_ordering(975) 00:11:08.546 fused_ordering(976) 00:11:08.546 fused_ordering(977) 00:11:08.546 fused_ordering(978) 00:11:08.546 fused_ordering(979) 00:11:08.546 fused_ordering(980) 00:11:08.546 fused_ordering(981) 00:11:08.546 fused_ordering(982) 00:11:08.546 fused_ordering(983) 00:11:08.546 fused_ordering(984) 00:11:08.546 fused_ordering(985) 00:11:08.546 fused_ordering(986) 00:11:08.546 fused_ordering(987) 00:11:08.546 fused_ordering(988) 00:11:08.546 fused_ordering(989) 00:11:08.546 fused_ordering(990) 00:11:08.546 fused_ordering(991) 00:11:08.546 fused_ordering(992) 00:11:08.546 fused_ordering(993) 00:11:08.546 fused_ordering(994) 00:11:08.546 fused_ordering(995) 00:11:08.546 fused_ordering(996) 00:11:08.546 fused_ordering(997) 00:11:08.546 fused_ordering(998) 00:11:08.546 fused_ordering(999) 00:11:08.546 fused_ordering(1000) 00:11:08.546 fused_ordering(1001) 00:11:08.546 fused_ordering(1002) 00:11:08.546 fused_ordering(1003) 00:11:08.546 fused_ordering(1004) 00:11:08.546 fused_ordering(1005) 00:11:08.546 fused_ordering(1006) 00:11:08.546 fused_ordering(1007) 00:11:08.546 fused_ordering(1008) 00:11:08.546 fused_ordering(1009) 00:11:08.546 fused_ordering(1010) 00:11:08.546 fused_ordering(1011) 00:11:08.546 fused_ordering(1012) 00:11:08.546 fused_ordering(1013) 00:11:08.546 fused_ordering(1014) 00:11:08.546 fused_ordering(1015) 00:11:08.546 fused_ordering(1016) 00:11:08.546 fused_ordering(1017) 00:11:08.546 fused_ordering(1018) 00:11:08.546 fused_ordering(1019) 00:11:08.546 fused_ordering(1020) 00:11:08.546 fused_ordering(1021) 00:11:08.546 fused_ordering(1022) 00:11:08.546 fused_ordering(1023) 00:11:08.546 01:13:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:08.546 01:13:43 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:08.546 01:13:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:08.546 01:13:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:08.546 01:13:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:08.546 01:13:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:08.546 01:13:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:08.546 01:13:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:08.546 rmmod nvme_tcp 00:11:08.546 rmmod nvme_fabrics 00:11:08.546 rmmod nvme_keyring 00:11:08.547 01:13:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:08.547 01:13:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:08.547 01:13:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:08.547 01:13:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 4007232 ']' 00:11:08.547 01:13:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 4007232 
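The nvmftestfini/nvmfcleanup sequence that begins here reduces to roughly the following. The module unloads and the target PID are exactly what the trace shows; the ip netns delete line is an assumption about what the _remove_spdk_ns helper does (its output is redirected away in the trace), so treat that line as illustrative only.

  sync
  modprobe -v -r nvme-tcp          # removes nvme_tcp, nvme_fabrics and nvme_keyring, as the rmmod lines above show
  modprobe -v -r nvme-fabrics
  kill 4007232                     # killprocess 4007232: stop the nvmf_tgt started for this test
  ip netns delete cvl_0_0_ns_spdk  # assumed equivalent of the _remove_spdk_ns helper
  ip -4 addr flush cvl_0_1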
00:11:08.547 01:13:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 4007232 ']' 00:11:08.547 01:13:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 4007232 00:11:08.547 01:13:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:11:08.547 01:13:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:08.547 01:13:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4007232 00:11:08.547 01:13:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:08.547 01:13:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:08.547 01:13:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4007232' 00:11:08.547 killing process with pid 4007232 00:11:08.547 01:13:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 4007232 00:11:08.547 [2024-05-15 01:13:44.066552] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:08.547 01:13:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 4007232 00:11:08.806 01:13:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:08.806 01:13:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:08.806 01:13:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:08.806 01:13:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:08.806 01:13:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:08.806 01:13:44 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.806 01:13:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.806 01:13:44 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.711 01:13:46 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:10.711 00:11:10.711 real 0m13.715s 00:11:10.711 user 0m7.743s 00:11:10.711 sys 0m7.914s 00:11:10.711 01:13:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:10.711 01:13:46 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:10.711 ************************************ 00:11:10.711 END TEST nvmf_fused_ordering 00:11:10.711 ************************************ 00:11:10.711 01:13:46 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:10.711 01:13:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:10.711 01:13:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:10.711 01:13:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:10.970 ************************************ 00:11:10.970 START TEST nvmf_delete_subsystem 00:11:10.970 ************************************ 00:11:10.970 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 
00:11:10.970 * Looking for test storage... 00:11:10.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.970 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.970 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:10.970 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.970 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.970 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.970 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.970 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.970 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.970 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.970 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.970 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.970 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.970 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:10.970 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:10.970 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.970 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:10.971 01:13:46 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:17.540 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:17.540 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:17.540 Found net devices under 0000:af:00.0: cvl_0_0 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:17.540 Found net devices under 0000:af:00.1: cvl_0_1 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:17.540 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.541 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.541 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:17.541 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:17.541 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.541 01:13:52 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:17.541 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:17.541 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:17.541 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:17.541 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:17.541 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:17.541 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:17.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:17.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:11:17.798 00:11:17.798 --- 10.0.0.2 ping statistics --- 00:11:17.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.798 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:17.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:17.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:11:17.798 00:11:17.798 --- 10.0.0.1 ping statistics --- 00:11:17.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.798 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=4011707 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 4011707 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 4011707 ']' 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
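The xtrace above is nvmf/common.sh wiring the two e810 ports into a point-to-point NVMe/TCP test topology: cvl_0_0 is moved into a private network namespace and becomes the target-side interface, while cvl_0_1 stays in the root namespace as the initiator side, and a ping in each direction confirms the link before the target application is started. A minimal standalone sketch of that setup, using only the interface names, addresses and port that appear in the log (run as root; the harness's nvmf_tcp_init does the equivalent with its own variables):

  # target NIC goes into its own namespace; initiator NIC stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # 10.0.0.1 = initiator side, 10.0.0.2 = target side
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # open the default NVMe/TCP port and verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1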
00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:17.798 01:13:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.798 [2024-05-15 01:13:53.351871] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:11:17.798 [2024-05-15 01:13:53.351923] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.798 EAL: No free 2048 kB hugepages reported on node 1 00:11:17.798 [2024-05-15 01:13:53.425994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:18.055 [2024-05-15 01:13:53.499592] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.055 [2024-05-15 01:13:53.499626] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.055 [2024-05-15 01:13:53.499635] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.055 [2024-05-15 01:13:53.499644] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.055 [2024-05-15 01:13:53.499651] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:18.055 [2024-05-15 01:13:53.499689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.055 [2024-05-15 01:13:53.499693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.620 [2024-05-15 01:13:54.210841] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.620 01:13:54 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.620 [2024-05-15 01:13:54.234853] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:18.620 [2024-05-15 01:13:54.235053] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.620 NULL1 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.620 Delay0 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=4011781 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:18.620 01:13:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:18.620 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.879 [2024-05-15 01:13:54.322689] subsystem.c:1557:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
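At this point the target is fully provisioned over JSON-RPC: rpc_cmd in the trace is a thin wrapper around scripts/rpc.py talking to the nvmf_tgt that listens on /var/tmp/spdk.sock, and spdk_nvme_perf is then launched from the root namespace against 10.0.0.2:4420. A condensed sketch of that sequence with the arguments copied from the xtrace; the $rpc shorthand, the backgrounding and the perf_pid=$! capture are illustrative stand-ins for the harness's own rpc_cmd and perf_pid handling:

  rpc="scripts/rpc.py"    # the log uses the full path under the Jenkins workspace

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # back the namespace with a null bdev wrapped in a delay bdev, so that plenty of I/O
  # is still outstanding when the subsystem is deleted a few seconds later
  $rpc bdev_null_create NULL1 1000 512
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # initiator-side load: 5 s of 70% read / 30% write random I/O, 512-byte blocks,
  # queue depth 128, on cores 2-3 (-c 0xC)
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!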
00:11:20.797 01:13:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.797 01:13:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.797 01:13:56 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 Write completed with error (sct=0, sc=8) 00:11:20.797 starting I/O failed: -6 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 starting I/O failed: -6 00:11:20.797 Write completed with error (sct=0, sc=8) 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 Write completed with error (sct=0, sc=8) 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 starting I/O failed: -6 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 starting I/O failed: -6 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 Write completed with error (sct=0, sc=8) 00:11:20.797 Write completed with error (sct=0, sc=8) 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 starting I/O failed: -6 00:11:20.797 Write completed with error (sct=0, sc=8) 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 starting I/O failed: -6 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 Write completed with error (sct=0, sc=8) 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 starting I/O failed: -6 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 starting I/O failed: -6 00:11:20.797 Write completed with error (sct=0, sc=8) 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 Write completed with error (sct=0, sc=8) 00:11:20.797 Write completed with error (sct=0, sc=8) 00:11:20.797 Write completed with error (sct=0, sc=8) 00:11:20.797 Write completed with error (sct=0, sc=8) 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.797 starting I/O failed: -6 00:11:20.797 Write completed with error (sct=0, sc=8) 00:11:20.797 starting I/O failed: -6 00:11:20.797 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 
00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 [2024-05-15 01:13:56.412616] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4980 is same with the state(5) to be set 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 
00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: 
-6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting 
I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Read completed with error (sct=0, sc=8) 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 Write completed with error (sct=0, sc=8) 00:11:20.798 starting I/O failed: -6 00:11:20.798 [2024-05-15 01:13:56.413422] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdea0000c00 is same with the state(5) to be set 00:11:21.732 [2024-05-15 01:13:57.379223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a7420 is same with the state(5) to be set 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 [2024-05-15 01:13:57.414462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a6c40 is same with the state(5) to be set 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, 
sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 [2024-05-15 01:13:57.414631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a6e20 is same with the state(5) to be set 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read 
completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 [2024-05-15 01:13:57.414839] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fdea000c2f0 is same with the state(5) to be set 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Write completed with error (sct=0, sc=8) 00:11:21.732 Read completed with error (sct=0, sc=8) 00:11:21.732 [2024-05-15 01:13:57.415004] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4b60 is same with the state(5) to be set 00:11:21.732 Initializing NVMe Controllers 00:11:21.732 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:21.732 Controller IO queue size 128, less than required. 00:11:21.732 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:21.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:21.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:21.732 Initialization complete. Launching workers. 
00:11:21.732 ======================================================== 00:11:21.732 Latency(us) 00:11:21.732 Device Information : IOPS MiB/s Average min max 00:11:21.732 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 186.73 0.09 951387.78 1188.12 1011036.12 00:11:21.732 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 178.28 0.09 846124.98 445.19 1010753.66 00:11:21.732 ======================================================== 00:11:21.732 Total : 365.01 0.18 899973.71 445.19 1011036.12 00:11:21.732 00:11:21.732 [2024-05-15 01:13:57.415780] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a7420 (9): Bad file descriptor 00:11:21.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:21.732 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.732 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:21.732 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4011781 00:11:21.732 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 4011781 00:11:22.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (4011781) - No such process 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 4011781 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 4011781 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 4011781 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
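This is the heart of the first pass: nvmf_delete_subsystem is issued while perf still has queue-depth-128 I/O outstanding behind the delay bdev, so every in-flight command is completed with an error (the long runs of 'completed with error (sct=0, sc=8)' above), the qpairs are torn down, and perf exits reporting 'errors occurred'. All the script has to verify afterwards is that perf actually terminated, and terminated unsuccessfully, before it re-creates the subsystem for the second pass. A rough sketch of that check, reconstructed from the delete_subsystem.sh line numbers visible in the trace (the exact loop body and timeout handling in the real script may differ; NOT is the harness helper, visible in the log, that asserts the wrapped command fails):

  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # issued while perf is mid-run

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do    # perf still alive?
      sleep 0.5
      ((delay++ > 30)) && exit 1               # roughly a 15 s budget before giving up
  done

  NOT wait "$perf_pid"    # reaping it must yield a non-zero exit status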
00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.298 [2024-05-15 01:13:57.952090] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=4012526 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4012526 00:11:22.298 01:13:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:22.560 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.560 [2024-05-15 01:13:58.015810] subsystem.c:1557:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:11:22.819 01:13:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:22.819 01:13:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4012526 00:11:22.819 01:13:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:23.384 01:13:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:23.384 01:13:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4012526 00:11:23.384 01:13:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:23.950 01:13:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:23.950 01:13:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4012526 00:11:23.950 01:13:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:24.516 01:13:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:24.516 01:13:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4012526 00:11:24.516 01:13:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.082 01:14:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:25.082 01:14:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4012526 00:11:25.082 01:14:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.340 01:14:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:25.340 01:14:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4012526 00:11:25.340 01:14:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.598 Initializing NVMe Controllers 00:11:25.598 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:25.598 Controller IO queue size 128, less than required. 00:11:25.598 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:25.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:25.598 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:25.598 Initialization complete. Launching workers. 
00:11:25.598 ======================================================== 00:11:25.598 Latency(us) 00:11:25.598 Device Information : IOPS MiB/s Average min max 00:11:25.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003385.23 1000165.38 1041718.08 00:11:25.598 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005015.18 1000339.11 1011885.02 00:11:25.598 ======================================================== 00:11:25.598 Total : 256.00 0.12 1004200.21 1000165.38 1041718.08 00:11:25.598 00:11:25.856 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:25.856 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 4012526 00:11:25.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (4012526) - No such process 00:11:25.856 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 4012526 00:11:25.856 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:25.856 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:25.856 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:25.856 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:25.857 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:25.857 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:25.857 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:25.857 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:25.857 rmmod nvme_tcp 00:11:25.857 rmmod nvme_fabrics 00:11:25.857 rmmod nvme_keyring 00:11:26.116 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:26.116 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:26.116 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:26.116 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 4011707 ']' 00:11:26.116 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 4011707 00:11:26.116 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 4011707 ']' 00:11:26.116 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 4011707 00:11:26.116 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:11:26.116 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:26.116 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4011707 00:11:26.116 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:26.116 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:26.116 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4011707' 00:11:26.116 killing process with pid 4011707 00:11:26.116 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 4011707 00:11:26.116 [2024-05-15 01:14:01.621998] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:26.116 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 4011707 00:11:26.375 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:26.375 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:26.375 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:26.375 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:26.375 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:26.375 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.375 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:26.375 01:14:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.281 01:14:03 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:28.281 00:11:28.281 real 0m17.483s 00:11:28.281 user 0m29.707s 00:11:28.281 sys 0m6.917s 00:11:28.281 01:14:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:28.281 01:14:03 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:28.281 ************************************ 00:11:28.281 END TEST nvmf_delete_subsystem 00:11:28.281 ************************************ 00:11:28.281 01:14:03 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:28.281 01:14:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:28.281 01:14:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:28.281 01:14:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:28.541 ************************************ 00:11:28.541 START TEST nvmf_ns_masking 00:11:28.541 ************************************ 00:11:28.541 01:14:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:28.541 * Looking for test storage... 
00:11:28.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:28.541 01:14:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.541 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:28.541 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.541 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.541 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.541 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.541 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.541 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.541 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.541 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.541 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.541 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.541 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:28.541 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:28.541 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=02b7cf88-bc9d-4f24-a5d2-4e6728b45903 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:28.542 01:14:04 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:28.542 01:14:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:35.117 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:35.117 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:35.117 Found net devices under 0000:af:00.0: cvl_0_0 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
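The discovery pass running here (and continuing just below for the second port) is how the harness turns the e810 NIC class matched above (Intel device ID 0x159b) into kernel interface names: for each matching PCI function it globs the sysfs net/ directory and keeps whatever interface it finds there. A minimal sketch of that lookup, using the PCI addresses and names this particular node reports (they will differ on other hosts):

# Map matched PCI functions to their net devices, as the trace does via
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*). Addresses below are this node's.
for pci in 0000:af:00.0 0000:af:00.1; do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "Found net device under $pci: ${dev##*/}"    # cvl_0_0 / cvl_0_1 here
    done
done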
00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:35.117 Found net devices under 0000:af:00.1: cvl_0_1 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.117 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.377 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.377 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.377 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:35.377 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.377 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.377 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:11:35.377 01:14:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:35.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:11:35.377 00:11:35.377 --- 10.0.0.2 ping statistics --- 00:11:35.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.377 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:35.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:11:35.377 00:11:35.377 --- 10.0.0.1 ping statistics --- 00:11:35.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.377 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=4016812 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 4016812 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 4016812 ']' 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:35.377 01:14:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:35.636 [2024-05-15 01:14:11.108439] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
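At this point nvmf_tcp_init has built the point-to-point topology the rest of the run relies on: one E810 port (cvl_0_0) is moved into a dedicated network namespace and addressed as the target side, the other (cvl_0_1) stays in the default namespace as the initiator side, TCP port 4420 is opened, and the two pings confirm reachability before nvmf_tgt is started inside the namespace with core mask 0xF (the four reactor messages above confirm cores 0-3). A condensed sketch assembled from the commands visible in the trace; interface names, addresses and nvmf_tgt arguments are the ones used in this run, and $SPDK_DIR stands in for the checked-out spdk tree:

# Reconstructed from the trace above; names and addresses are specific to this run.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# the harness then waits for the app to listen on /var/tmp/spdk.sock before issuing RPCs

Keeping the target-side port in its own namespace forces initiator/target traffic across the two physical ports instead of letting the kernel resolve it entirely inside one network stack.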
00:11:35.636 [2024-05-15 01:14:11.108484] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.636 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.636 [2024-05-15 01:14:11.181436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.636 [2024-05-15 01:14:11.256060] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.636 [2024-05-15 01:14:11.256102] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.636 [2024-05-15 01:14:11.256111] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.636 [2024-05-15 01:14:11.256120] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.636 [2024-05-15 01:14:11.256127] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.636 [2024-05-15 01:14:11.256178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.636 [2024-05-15 01:14:11.256276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.636 [2024-05-15 01:14:11.256295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.636 [2024-05-15 01:14:11.256296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.574 01:14:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:36.574 01:14:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:11:36.574 01:14:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:36.574 01:14:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:36.574 01:14:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:36.574 01:14:11 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.574 01:14:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:36.574 [2024-05-15 01:14:12.102781] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.574 01:14:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:36.574 01:14:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:36.574 01:14:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:36.833 Malloc1 00:11:36.833 01:14:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:36.833 Malloc2 00:11:36.833 01:14:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:37.092 01:14:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:37.352 01:14:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:37.352 [2024-05-15 01:14:12.981281] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:37.352 [2024-05-15 01:14:12.981573] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.352 01:14:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:11:37.352 01:14:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 02b7cf88-bc9d-4f24-a5d2-4e6728b45903 -a 10.0.0.2 -s 4420 -i 4 00:11:37.659 01:14:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.659 01:14:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:11:37.659 01:14:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.659 01:14:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:37.659 01:14:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:11:39.577 01:14:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:39.577 01:14:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:39.577 01:14:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.577 01:14:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:39.577 01:14:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.577 01:14:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:11:39.577 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:39.577 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:39.577 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:39.577 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:39.577 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:39.577 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:39.577 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:39.577 [ 0]:0x1 00:11:39.577 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:39.577 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:39.577 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=399978c713c04aebbfbb0731c957d5f0 00:11:39.577 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 399978c713c04aebbfbb0731c957d5f0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:39.577 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:39.837 01:14:15 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:39.837 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:39.837 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:39.837 [ 0]:0x1 00:11:39.837 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:39.837 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:39.837 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=399978c713c04aebbfbb0731c957d5f0 00:11:39.837 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 399978c713c04aebbfbb0731c957d5f0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:39.837 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:39.837 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:39.837 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:39.837 [ 1]:0x2 00:11:39.837 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:39.837 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:40.096 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8ca1c626eace453a9ab2a5e354c50b32 00:11:40.096 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8ca1c626eace453a9ab2a5e354c50b32 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:40.096 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:11:40.096 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:40.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.355 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.355 01:14:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:40.614 01:14:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:11:40.614 01:14:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 02b7cf88-bc9d-4f24-a5d2-4e6728b45903 -a 10.0.0.2 -s 4420 -i 4 00:11:40.873 01:14:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:40.873 01:14:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:11:40.873 01:14:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.873 01:14:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:11:40.873 01:14:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:11:40.873 01:14:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:11:42.778 01:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:42.778 01:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:42.778 01:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # 
grep -c SPDKISFASTANDAWESOME 00:11:42.778 01:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:42.778 01:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.778 01:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:11:42.778 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:42.778 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:42.778 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:42.778 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:42.778 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:11:42.778 01:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:42.778 01:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:42.778 01:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:43.038 [ 0]:0x2 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8ca1c626eace453a9ab2a5e354c50b32 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8ca1c626eace453a9ab2a5e354c50b32 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.038 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:43.297 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:11:43.297 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:43.297 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.297 [ 0]:0x1 00:11:43.297 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:43.297 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.297 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=399978c713c04aebbfbb0731c957d5f0 00:11:43.297 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 399978c713c04aebbfbb0731c957d5f0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.297 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:11:43.297 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.297 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:43.297 [ 1]:0x2 00:11:43.297 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:43.297 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.297 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8ca1c626eace453a9ab2a5e354c50b32 00:11:43.297 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8ca1c626eace453a9ab2a5e354c50b32 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.297 01:14:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:43.556 01:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:11:43.556 01:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:43.556 01:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:43.556 01:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:43.556 01:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:43.556 01:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:43.556 01:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:43.556 01:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:43.556 01:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:43.556 01:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.556 01:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:43.556 01:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.556 01:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:43.556 01:14:19 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.556 01:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:43.557 01:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:43.557 01:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:43.557 01:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:43.557 01:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:11:43.557 01:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:43.557 01:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:43.557 [ 0]:0x2 00:11:43.557 01:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:43.557 01:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:43.557 01:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8ca1c626eace453a9ab2a5e354c50b32 00:11:43.557 01:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8ca1c626eace453a9ab2a5e354c50b32 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:43.557 01:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:11:43.557 01:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:43.815 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.815 01:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:43.815 01:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:11:43.815 01:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 02b7cf88-bc9d-4f24-a5d2-4e6728b45903 -a 10.0.0.2 -s 4420 -i 4 00:11:44.074 01:14:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:44.074 01:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:11:44.074 01:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:44.074 01:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:11:44.074 01:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:11:44.074 01:14:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:11:45.979 01:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:45.979 01:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:45.979 01:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:45.979 01:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:11:45.979 01:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:45.979 01:14:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:11:45.979 01:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:11:45.979 01:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:46.239 01:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:46.239 01:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:46.239 01:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:11:46.239 01:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.239 01:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:46.239 [ 0]:0x1 00:11:46.239 01:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:46.239 01:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:46.239 01:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=399978c713c04aebbfbb0731c957d5f0 00:11:46.239 01:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 399978c713c04aebbfbb0731c957d5f0 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.239 01:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:11:46.239 01:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.239 01:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:46.239 [ 1]:0x2 00:11:46.239 01:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:46.239 01:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:46.499 01:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8ca1c626eace453a9ab2a5e354c50b32 00:11:46.499 01:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8ca1c626eace453a9ab2a5e354c50b32 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.499 01:14:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.499 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:46.759 [ 0]:0x2 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8ca1c626eace453a9ab2a5e354c50b32 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8ca1c626eace453a9ab2a5e354c50b32 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:46.759 [2024-05-15 01:14:22.391685] nvmf_rpc.c:1780:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:46.759 
request: 00:11:46.759 { 00:11:46.759 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:46.759 "nsid": 2, 00:11:46.759 "host": "nqn.2016-06.io.spdk:host1", 00:11:46.759 "method": "nvmf_ns_remove_host", 00:11:46.759 "req_id": 1 00:11:46.759 } 00:11:46.759 Got JSON-RPC error response 00:11:46.759 response: 00:11:46.759 { 00:11:46.759 "code": -32602, 00:11:46.759 "message": "Invalid parameters" 00:11:46.759 } 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:46.759 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:47.018 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:47.018 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.018 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:47.018 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:47.018 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:47.018 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:47.018 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:11:47.018 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:47.018 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:11:47.018 [ 0]:0x2 00:11:47.019 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:47.019 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:47.019 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=8ca1c626eace453a9ab2a5e354c50b32 00:11:47.019 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 8ca1c626eace453a9ab2a5e354c50b32 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.019 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:11:47.019 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.019 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:47.278 rmmod nvme_tcp 00:11:47.278 rmmod nvme_fabrics 00:11:47.278 rmmod nvme_keyring 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 4016812 ']' 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 4016812 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 4016812 ']' 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 4016812 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4016812 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4016812' 00:11:47.278 killing process with pid 4016812 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 4016812 00:11:47.278 [2024-05-15 01:14:22.881316] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:47.278 01:14:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 4016812 00:11:47.537 01:14:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:47.537 01:14:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:47.537 01:14:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:47.537 01:14:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:11:47.537 01:14:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:47.537 01:14:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.537 01:14:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.537 01:14:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.073 01:14:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:50.073 00:11:50.073 real 0m21.202s 00:11:50.073 user 0m51.035s 00:11:50.073 sys 0m7.719s 00:11:50.073 01:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:50.073 01:14:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:50.073 ************************************ 00:11:50.073 END TEST nvmf_ns_masking 00:11:50.073 ************************************ 00:11:50.073 01:14:25 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:50.073 01:14:25 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:50.073 01:14:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:50.073 01:14:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:50.073 01:14:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:50.073 ************************************ 00:11:50.073 START TEST nvmf_nvme_cli 00:11:50.073 ************************************ 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:50.073 * Looking for test storage... 
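That closes the nvmf_ns_masking run. Stripped of the xtrace noise, what it verified is per-host namespace masking: a namespace added with --no-auto-visible stays hidden from a connected host (its NGUID reads back as all zeroes) until nvmf_ns_add_host allows it, disappears again after nvmf_ns_remove_host, and the same visibility RPC against a namespace that was added auto-visible fails with JSON-RPC -32602 (Invalid parameters), exactly as logged. A condensed reconstruction of that sequence using only commands that appear in the trace; rpc.py abbreviates the scripts/rpc.py path shown above, and the NQNs, host UUID and 10.0.0.2:4420 listener are this run's values:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py bdev_malloc_create 64 512 -b Malloc2
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1          # auto-visible
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I 02b7cf88-bc9d-4f24-a5d2-4e6728b45903 -a 10.0.0.2 -s 4420 -i 4
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # non-zero NGUID: nsid 1 visible
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2          # also auto-visible
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

# Re-add nsid 1 masked: it stays hidden until this host is explicitly allowed.
rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I 02b7cf88-bc9d-4f24-a5d2-4e6728b45903 -a 10.0.0.2 -s 4420 -i 4
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # all zeroes: nsid 1 hidden
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # real NGUID: nsid 1 visible again
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

# nsid 2 was never added with --no-auto-visible, so this call is rejected (-32602):
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 || true

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The nvmf_nvme_cli test that starts here repeats the same nvmftestinit device discovery and network setup before driving the target with nvme-cli.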
00:11:50.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.073 01:14:25 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:50.074 01:14:25 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:56.650 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:56.650 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:56.650 Found net devices under 0000:af:00.0: cvl_0_0 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:56.650 Found net devices under 0000:af:00.1: cvl_0_1 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:56.650 01:14:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:56.650 01:14:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:56.650 01:14:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:56.650 01:14:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:56.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:56.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:11:56.650 00:11:56.650 --- 10.0.0.2 ping statistics --- 00:11:56.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.650 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:11:56.650 01:14:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:56.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:56.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:11:56.650 00:11:56.650 --- 10.0.0.1 ping statistics --- 00:11:56.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.650 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:11:56.650 01:14:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.650 01:14:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:56.650 01:14:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:56.650 01:14:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.650 01:14:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:56.650 01:14:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:56.650 01:14:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.650 01:14:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:56.650 01:14:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:56.650 01:14:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:56.650 01:14:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:56.650 01:14:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:56.650 01:14:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:56.650 01:14:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=4022546 00:11:56.651 01:14:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:56.651 01:14:32 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 4022546 00:11:56.651 01:14:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 4022546 ']' 00:11:56.651 01:14:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.651 01:14:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:56.651 01:14:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.651 01:14:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:56.651 01:14:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:56.651 [2024-05-15 01:14:32.213183] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:11:56.651 [2024-05-15 01:14:32.213234] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.651 EAL: No free 2048 kB hugepages reported on node 1 00:11:56.651 [2024-05-15 01:14:32.287654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.910 [2024-05-15 01:14:32.362383] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.910 [2024-05-15 01:14:32.362421] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:56.910 [2024-05-15 01:14:32.362430] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.910 [2024-05-15 01:14:32.362439] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.910 [2024-05-15 01:14:32.362446] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:56.910 [2024-05-15 01:14:32.362493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.910 [2024-05-15 01:14:32.362612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.910 [2024-05-15 01:14:32.362696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.910 [2024-05-15 01:14:32.362698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.515 [2024-05-15 01:14:33.060030] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.515 Malloc0 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.515 Malloc1 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.515 01:14:33 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.515 [2024-05-15 01:14:33.143829] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:57.515 [2024-05-15 01:14:33.144103] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.515 01:14:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:11:57.774 00:11:57.774 Discovery Log Number of Records 2, Generation counter 2 00:11:57.774 =====Discovery Log Entry 0====== 00:11:57.774 trtype: tcp 00:11:57.774 adrfam: ipv4 00:11:57.774 subtype: current discovery subsystem 00:11:57.774 treq: not required 00:11:57.774 portid: 0 00:11:57.774 trsvcid: 4420 00:11:57.774 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:57.774 traddr: 10.0.0.2 00:11:57.774 eflags: explicit discovery connections, duplicate discovery information 00:11:57.774 sectype: none 00:11:57.774 =====Discovery Log Entry 1====== 00:11:57.774 trtype: tcp 00:11:57.774 adrfam: ipv4 00:11:57.774 subtype: nvme subsystem 00:11:57.774 treq: not required 00:11:57.774 portid: 0 00:11:57.774 trsvcid: 4420 00:11:57.774 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:57.774 traddr: 10.0.0.2 00:11:57.774 eflags: none 00:11:57.774 sectype: none 00:11:57.774 01:14:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:57.774 01:14:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:57.774 01:14:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:57.774 01:14:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:57.774 01:14:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:57.774 01:14:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:57.774 01:14:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
00:11:57.774 01:14:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:57.774 01:14:33 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:57.775 01:14:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:57.775 01:14:33 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:59.151 01:14:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:59.151 01:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:11:59.151 01:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.151 01:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:11:59.151 01:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:11:59.151 01:14:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:12:01.055 01:14:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:01.055 01:14:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:01.055 01:14:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.055 01:14:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:12:01.055 01:14:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.055 01:14:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:12:01.055 01:14:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:01.055 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:01.055 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:01.055 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:01.314 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:01.314 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:01.314 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:01.314 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:01.314 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:01.315 /dev/nvme0n1 ]] 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:01.315 01:14:36 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:01.315 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:01.315 rmmod nvme_tcp 00:12:01.315 rmmod nvme_fabrics 00:12:01.315 rmmod nvme_keyring 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 4022546 ']' 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 4022546 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 4022546 ']' 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 4022546 00:12:01.315 01:14:36 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:12:01.575 01:14:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:01.575 01:14:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4022546 00:12:01.575 01:14:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:01.575 01:14:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:01.575 01:14:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4022546' 00:12:01.575 killing process with pid 4022546 00:12:01.575 01:14:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 4022546 00:12:01.575 [2024-05-15 01:14:37.056557] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:01.575 01:14:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 4022546 00:12:01.835 01:14:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:01.835 01:14:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:01.835 01:14:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:01.835 01:14:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:01.835 01:14:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:01.835 01:14:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.835 01:14:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.835 01:14:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.742 01:14:39 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:03.742 00:12:03.742 real 0m14.088s 00:12:03.742 user 0m21.359s 00:12:03.742 sys 0m5.902s 00:12:03.742 01:14:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:03.742 01:14:39 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.742 ************************************ 00:12:03.742 END TEST nvmf_nvme_cli 00:12:03.742 ************************************ 00:12:03.742 01:14:39 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:03.742 01:14:39 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:03.742 01:14:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:03.742 01:14:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:03.742 01:14:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:04.002 ************************************ 00:12:04.002 
START TEST nvmf_vfio_user 00:12:04.002 ************************************ 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:04.002 * Looking for test storage... 00:12:04.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
00:12:04.002 01:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:04.003 01:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:04.003 01:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4024008 00:12:04.003 01:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4024008' 00:12:04.003 Process pid: 4024008 00:12:04.003 01:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:04.003 01:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4024008 00:12:04.003 01:14:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:04.003 01:14:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 4024008 ']' 00:12:04.003 01:14:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.003 01:14:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:04.003 01:14:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.003 01:14:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:04.003 01:14:39 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:04.003 [2024-05-15 01:14:39.666022] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:12:04.003 [2024-05-15 01:14:39.666075] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.262 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.262 [2024-05-15 01:14:39.735575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:04.262 [2024-05-15 01:14:39.809902] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.262 [2024-05-15 01:14:39.809936] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.262 [2024-05-15 01:14:39.809946] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.262 [2024-05-15 01:14:39.809955] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.262 [2024-05-15 01:14:39.809962] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:04.262 [2024-05-15 01:14:39.810011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.262 [2024-05-15 01:14:39.810106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.262 [2024-05-15 01:14:39.810188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.262 [2024-05-15 01:14:39.810195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.830 01:14:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:04.830 01:14:40 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:12:04.830 01:14:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:06.207 01:14:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:06.207 01:14:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:06.207 01:14:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:06.207 01:14:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:06.207 01:14:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:06.207 01:14:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:06.207 Malloc1 00:12:06.207 01:14:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:06.466 01:14:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:06.725 01:14:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:06.725 [2024-05-15 01:14:42.396850] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:06.985 01:14:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:06.985 01:14:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:06.985 01:14:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:06.985 Malloc2 00:12:06.985 01:14:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:07.244 01:14:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:07.503 01:14:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
00:12:07.503 01:14:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:07.503 01:14:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:07.503 01:14:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:07.503 01:14:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:07.503 01:14:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:07.503 01:14:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:07.503 [2024-05-15 01:14:43.160040] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:12:07.503 [2024-05-15 01:14:43.160070] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4024566 ] 00:12:07.503 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.503 [2024-05-15 01:14:43.190528] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:07.503 [2024-05-15 01:14:43.193619] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:07.503 [2024-05-15 01:14:43.193638] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f704d5a6000 00:12:07.764 [2024-05-15 01:14:43.194620] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.764 [2024-05-15 01:14:43.195625] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.764 [2024-05-15 01:14:43.196629] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.764 [2024-05-15 01:14:43.197634] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:07.764 [2024-05-15 01:14:43.198637] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:07.764 [2024-05-15 01:14:43.199649] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.764 [2024-05-15 01:14:43.200652] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:07.764 [2024-05-15 01:14:43.201658] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.764 [2024-05-15 01:14:43.202665] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:07.764 [2024-05-15 01:14:43.202680] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f704d59b000 00:12:07.764 [2024-05-15 01:14:43.203575] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:07.764 [2024-05-15 01:14:43.211868] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:07.764 [2024-05-15 01:14:43.211895] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:07.764 [2024-05-15 01:14:43.216753] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:07.764 [2024-05-15 01:14:43.216794] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:07.764 [2024-05-15 01:14:43.216865] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:07.764 [2024-05-15 01:14:43.216884] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:07.764 [2024-05-15 01:14:43.216891] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:07.764 [2024-05-15 01:14:43.217748] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:07.764 [2024-05-15 01:14:43.217758] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:07.764 [2024-05-15 01:14:43.217767] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:07.764 [2024-05-15 01:14:43.218754] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:07.764 [2024-05-15 01:14:43.218763] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:07.764 [2024-05-15 01:14:43.218772] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:07.764 [2024-05-15 01:14:43.219763] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:07.764 [2024-05-15 01:14:43.219772] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:07.764 [2024-05-15 01:14:43.220769] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:07.764 [2024-05-15 01:14:43.220778] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:07.764 [2024-05-15 01:14:43.220784] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:07.764 [2024-05-15 01:14:43.220793] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:07.764 
[2024-05-15 01:14:43.220900] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:07.764 [2024-05-15 01:14:43.220906] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:07.764 [2024-05-15 01:14:43.220912] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:07.764 [2024-05-15 01:14:43.221777] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:07.764 [2024-05-15 01:14:43.222784] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:07.764 [2024-05-15 01:14:43.223792] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:07.764 [2024-05-15 01:14:43.224791] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:07.764 [2024-05-15 01:14:43.224860] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:07.764 [2024-05-15 01:14:43.225803] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:07.764 [2024-05-15 01:14:43.225812] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:07.764 [2024-05-15 01:14:43.225820] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:07.764 [2024-05-15 01:14:43.225841] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:07.764 [2024-05-15 01:14:43.225854] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:07.764 [2024-05-15 01:14:43.225871] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:07.764 [2024-05-15 01:14:43.225877] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:07.764 [2024-05-15 01:14:43.225891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:07.764 [2024-05-15 01:14:43.225930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:07.764 [2024-05-15 01:14:43.225940] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:07.764 [2024-05-15 01:14:43.225947] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:07.764 [2024-05-15 01:14:43.225952] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:07.764 [2024-05-15 01:14:43.225958] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:07.764 [2024-05-15 01:14:43.225964] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:07.764 [2024-05-15 01:14:43.225971] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:07.764 [2024-05-15 01:14:43.225977] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:07.764 [2024-05-15 01:14:43.225988] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:07.764 [2024-05-15 01:14:43.226003] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:07.764 [2024-05-15 01:14:43.226013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:07.764 [2024-05-15 01:14:43.226026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.764 [2024-05-15 01:14:43.226036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.764 [2024-05-15 01:14:43.226044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.764 [2024-05-15 01:14:43.226053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.764 [2024-05-15 01:14:43.226059] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:07.764 [2024-05-15 01:14:43.226068] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:07.764 [2024-05-15 01:14:43.226077] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:07.764 [2024-05-15 01:14:43.226087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:07.764 [2024-05-15 01:14:43.226094] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:07.764 [2024-05-15 01:14:43.226105] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:07.765 [2024-05-15 01:14:43.226113] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:07.765 [2024-05-15 01:14:43.226120] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:07.765 [2024-05-15 01:14:43.226129] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:07.765 [2024-05-15 
01:14:43.226141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:07.765 [2024-05-15 01:14:43.226183] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:07.765 [2024-05-15 01:14:43.226195] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:07.765 [2024-05-15 01:14:43.226204] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:07.765 [2024-05-15 01:14:43.226210] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:07.765 [2024-05-15 01:14:43.226216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:07.765 [2024-05-15 01:14:43.226233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:07.765 [2024-05-15 01:14:43.226246] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:07.765 [2024-05-15 01:14:43.226255] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:07.765 [2024-05-15 01:14:43.226264] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:07.765 [2024-05-15 01:14:43.226272] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:07.765 [2024-05-15 01:14:43.226278] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:07.765 [2024-05-15 01:14:43.226284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:07.765 [2024-05-15 01:14:43.226300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:07.765 [2024-05-15 01:14:43.226311] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:07.765 [2024-05-15 01:14:43.226320] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:07.765 [2024-05-15 01:14:43.226328] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:07.765 [2024-05-15 01:14:43.226334] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:07.765 [2024-05-15 01:14:43.226340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:07.765 [2024-05-15 01:14:43.226356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:07.765 [2024-05-15 01:14:43.226368] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:07.765 
[2024-05-15 01:14:43.226378] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:07.765 [2024-05-15 01:14:43.226386] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:07.765 [2024-05-15 01:14:43.226393] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:07.765 [2024-05-15 01:14:43.226399] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:07.765 [2024-05-15 01:14:43.226406] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:07.765 [2024-05-15 01:14:43.226412] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:07.765 [2024-05-15 01:14:43.226418] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:07.765 [2024-05-15 01:14:43.226438] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:07.765 [2024-05-15 01:14:43.226449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:07.765 [2024-05-15 01:14:43.226462] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:07.765 [2024-05-15 01:14:43.226473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:07.765 [2024-05-15 01:14:43.226485] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:07.765 [2024-05-15 01:14:43.226493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:07.765 [2024-05-15 01:14:43.226505] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:07.765 [2024-05-15 01:14:43.226513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:07.765 [2024-05-15 01:14:43.226525] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:07.765 [2024-05-15 01:14:43.226531] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:07.765 [2024-05-15 01:14:43.226535] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:07.765 [2024-05-15 01:14:43.226540] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:07.765 [2024-05-15 01:14:43.226547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:07.765 [2024-05-15 01:14:43.226555] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:07.765 [2024-05-15 01:14:43.226561] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:07.765 [2024-05-15 01:14:43.226568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:07.765 [2024-05-15 01:14:43.226575] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:07.765 [2024-05-15 01:14:43.226581] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:07.765 [2024-05-15 01:14:43.226588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:07.765 [2024-05-15 01:14:43.226598] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:07.765 [2024-05-15 01:14:43.226605] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:07.765 [2024-05-15 01:14:43.226612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:07.765 [2024-05-15 01:14:43.226620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:07.765 [2024-05-15 01:14:43.226635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:07.765 [2024-05-15 01:14:43.226646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:07.765 [2024-05-15 01:14:43.226657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:07.765 ===================================================== 00:12:07.765 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:07.765 ===================================================== 00:12:07.765 Controller Capabilities/Features 00:12:07.765 ================================ 00:12:07.765 Vendor ID: 4e58 00:12:07.765 Subsystem Vendor ID: 4e58 00:12:07.765 Serial Number: SPDK1 00:12:07.765 Model Number: SPDK bdev Controller 00:12:07.765 Firmware Version: 24.05 00:12:07.765 Recommended Arb Burst: 6 00:12:07.765 IEEE OUI Identifier: 8d 6b 50 00:12:07.765 Multi-path I/O 00:12:07.765 May have multiple subsystem ports: Yes 00:12:07.765 May have multiple controllers: Yes 00:12:07.765 Associated with SR-IOV VF: No 00:12:07.765 Max Data Transfer Size: 131072 00:12:07.765 Max Number of Namespaces: 32 00:12:07.765 Max Number of I/O Queues: 127 00:12:07.765 NVMe Specification Version (VS): 1.3 00:12:07.765 NVMe Specification Version (Identify): 1.3 00:12:07.765 Maximum Queue Entries: 256 00:12:07.765 Contiguous Queues Required: Yes 00:12:07.765 Arbitration Mechanisms Supported 00:12:07.765 Weighted Round Robin: Not Supported 00:12:07.765 Vendor Specific: Not Supported 00:12:07.765 Reset Timeout: 15000 ms 00:12:07.765 Doorbell Stride: 4 bytes 00:12:07.765 NVM Subsystem Reset: Not Supported 00:12:07.765 Command Sets Supported 00:12:07.765 NVM Command Set: Supported 00:12:07.765 Boot Partition: Not Supported 00:12:07.765 Memory Page Size Minimum: 4096 bytes 00:12:07.765 Memory Page Size Maximum: 4096 bytes 00:12:07.765 Persistent Memory Region: Not Supported 00:12:07.765 Optional Asynchronous 
Events Supported 00:12:07.765 Namespace Attribute Notices: Supported 00:12:07.765 Firmware Activation Notices: Not Supported 00:12:07.765 ANA Change Notices: Not Supported 00:12:07.765 PLE Aggregate Log Change Notices: Not Supported 00:12:07.765 LBA Status Info Alert Notices: Not Supported 00:12:07.765 EGE Aggregate Log Change Notices: Not Supported 00:12:07.765 Normal NVM Subsystem Shutdown event: Not Supported 00:12:07.765 Zone Descriptor Change Notices: Not Supported 00:12:07.765 Discovery Log Change Notices: Not Supported 00:12:07.765 Controller Attributes 00:12:07.765 128-bit Host Identifier: Supported 00:12:07.765 Non-Operational Permissive Mode: Not Supported 00:12:07.765 NVM Sets: Not Supported 00:12:07.765 Read Recovery Levels: Not Supported 00:12:07.765 Endurance Groups: Not Supported 00:12:07.765 Predictable Latency Mode: Not Supported 00:12:07.765 Traffic Based Keep ALive: Not Supported 00:12:07.765 Namespace Granularity: Not Supported 00:12:07.765 SQ Associations: Not Supported 00:12:07.765 UUID List: Not Supported 00:12:07.765 Multi-Domain Subsystem: Not Supported 00:12:07.765 Fixed Capacity Management: Not Supported 00:12:07.765 Variable Capacity Management: Not Supported 00:12:07.765 Delete Endurance Group: Not Supported 00:12:07.765 Delete NVM Set: Not Supported 00:12:07.765 Extended LBA Formats Supported: Not Supported 00:12:07.766 Flexible Data Placement Supported: Not Supported 00:12:07.766 00:12:07.766 Controller Memory Buffer Support 00:12:07.766 ================================ 00:12:07.766 Supported: No 00:12:07.766 00:12:07.766 Persistent Memory Region Support 00:12:07.766 ================================ 00:12:07.766 Supported: No 00:12:07.766 00:12:07.766 Admin Command Set Attributes 00:12:07.766 ============================ 00:12:07.766 Security Send/Receive: Not Supported 00:12:07.766 Format NVM: Not Supported 00:12:07.766 Firmware Activate/Download: Not Supported 00:12:07.766 Namespace Management: Not Supported 00:12:07.766 Device Self-Test: Not Supported 00:12:07.766 Directives: Not Supported 00:12:07.766 NVMe-MI: Not Supported 00:12:07.766 Virtualization Management: Not Supported 00:12:07.766 Doorbell Buffer Config: Not Supported 00:12:07.766 Get LBA Status Capability: Not Supported 00:12:07.766 Command & Feature Lockdown Capability: Not Supported 00:12:07.766 Abort Command Limit: 4 00:12:07.766 Async Event Request Limit: 4 00:12:07.766 Number of Firmware Slots: N/A 00:12:07.766 Firmware Slot 1 Read-Only: N/A 00:12:07.766 Firmware Activation Without Reset: N/A 00:12:07.766 Multiple Update Detection Support: N/A 00:12:07.766 Firmware Update Granularity: No Information Provided 00:12:07.766 Per-Namespace SMART Log: No 00:12:07.766 Asymmetric Namespace Access Log Page: Not Supported 00:12:07.766 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:07.766 Command Effects Log Page: Supported 00:12:07.766 Get Log Page Extended Data: Supported 00:12:07.766 Telemetry Log Pages: Not Supported 00:12:07.766 Persistent Event Log Pages: Not Supported 00:12:07.766 Supported Log Pages Log Page: May Support 00:12:07.766 Commands Supported & Effects Log Page: Not Supported 00:12:07.766 Feature Identifiers & Effects Log Page:May Support 00:12:07.766 NVMe-MI Commands & Effects Log Page: May Support 00:12:07.766 Data Area 4 for Telemetry Log: Not Supported 00:12:07.766 Error Log Page Entries Supported: 128 00:12:07.766 Keep Alive: Supported 00:12:07.766 Keep Alive Granularity: 10000 ms 00:12:07.766 00:12:07.766 NVM Command Set Attributes 00:12:07.766 ========================== 
00:12:07.766 Submission Queue Entry Size 00:12:07.766 Max: 64 00:12:07.766 Min: 64 00:12:07.766 Completion Queue Entry Size 00:12:07.766 Max: 16 00:12:07.766 Min: 16 00:12:07.766 Number of Namespaces: 32 00:12:07.766 Compare Command: Supported 00:12:07.766 Write Uncorrectable Command: Not Supported 00:12:07.766 Dataset Management Command: Supported 00:12:07.766 Write Zeroes Command: Supported 00:12:07.766 Set Features Save Field: Not Supported 00:12:07.766 Reservations: Not Supported 00:12:07.766 Timestamp: Not Supported 00:12:07.766 Copy: Supported 00:12:07.766 Volatile Write Cache: Present 00:12:07.766 Atomic Write Unit (Normal): 1 00:12:07.766 Atomic Write Unit (PFail): 1 00:12:07.766 Atomic Compare & Write Unit: 1 00:12:07.766 Fused Compare & Write: Supported 00:12:07.766 Scatter-Gather List 00:12:07.766 SGL Command Set: Supported (Dword aligned) 00:12:07.766 SGL Keyed: Not Supported 00:12:07.766 SGL Bit Bucket Descriptor: Not Supported 00:12:07.766 SGL Metadata Pointer: Not Supported 00:12:07.766 Oversized SGL: Not Supported 00:12:07.766 SGL Metadata Address: Not Supported 00:12:07.766 SGL Offset: Not Supported 00:12:07.766 Transport SGL Data Block: Not Supported 00:12:07.766 Replay Protected Memory Block: Not Supported 00:12:07.766 00:12:07.766 Firmware Slot Information 00:12:07.766 ========================= 00:12:07.766 Active slot: 1 00:12:07.766 Slot 1 Firmware Revision: 24.05 00:12:07.766 00:12:07.766 00:12:07.766 Commands Supported and Effects 00:12:07.766 ============================== 00:12:07.766 Admin Commands 00:12:07.766 -------------- 00:12:07.766 Get Log Page (02h): Supported 00:12:07.766 Identify (06h): Supported 00:12:07.766 Abort (08h): Supported 00:12:07.766 Set Features (09h): Supported 00:12:07.766 Get Features (0Ah): Supported 00:12:07.766 Asynchronous Event Request (0Ch): Supported 00:12:07.766 Keep Alive (18h): Supported 00:12:07.766 I/O Commands 00:12:07.766 ------------ 00:12:07.766 Flush (00h): Supported LBA-Change 00:12:07.766 Write (01h): Supported LBA-Change 00:12:07.766 Read (02h): Supported 00:12:07.766 Compare (05h): Supported 00:12:07.766 Write Zeroes (08h): Supported LBA-Change 00:12:07.766 Dataset Management (09h): Supported LBA-Change 00:12:07.766 Copy (19h): Supported LBA-Change 00:12:07.766 Unknown (79h): Supported LBA-Change 00:12:07.766 Unknown (7Ah): Supported 00:12:07.766 00:12:07.766 Error Log 00:12:07.766 ========= 00:12:07.766 00:12:07.766 Arbitration 00:12:07.766 =========== 00:12:07.766 Arbitration Burst: 1 00:12:07.766 00:12:07.766 Power Management 00:12:07.766 ================ 00:12:07.766 Number of Power States: 1 00:12:07.766 Current Power State: Power State #0 00:12:07.766 Power State #0: 00:12:07.766 Max Power: 0.00 W 00:12:07.766 Non-Operational State: Operational 00:12:07.766 Entry Latency: Not Reported 00:12:07.766 Exit Latency: Not Reported 00:12:07.766 Relative Read Throughput: 0 00:12:07.766 Relative Read Latency: 0 00:12:07.766 Relative Write Throughput: 0 00:12:07.766 Relative Write Latency: 0 00:12:07.766 Idle Power: Not Reported 00:12:07.766 Active Power: Not Reported 00:12:07.766 Non-Operational Permissive Mode: Not Supported 00:12:07.766 00:12:07.766 Health Information 00:12:07.766 ================== 00:12:07.766 Critical Warnings: 00:12:07.766 Available Spare Space: OK 00:12:07.766 Temperature: OK 00:12:07.766 Device Reliability: OK 00:12:07.766 Read Only: No 00:12:07.766 Volatile Memory Backup: OK 00:12:07.766 Current Temperature: 0 Kelvin (-2[2024-05-15 01:14:43.226745] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:07.766 [2024-05-15 01:14:43.226757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:07.766 [2024-05-15 01:14:43.226783] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:07.766 [2024-05-15 01:14:43.226793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.766 [2024-05-15 01:14:43.226801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.766 [2024-05-15 01:14:43.226808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.766 [2024-05-15 01:14:43.226816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.766 [2024-05-15 01:14:43.230198] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:07.766 [2024-05-15 01:14:43.230211] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:07.766 [2024-05-15 01:14:43.230826] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:07.766 [2024-05-15 01:14:43.230873] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:07.766 [2024-05-15 01:14:43.230880] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:07.766 [2024-05-15 01:14:43.231836] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:07.766 [2024-05-15 01:14:43.231848] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:07.766 [2024-05-15 01:14:43.231896] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:07.766 [2024-05-15 01:14:43.232864] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:07.766 73 Celsius) 00:12:07.766 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:07.766 Available Spare: 0% 00:12:07.766 Available Spare Threshold: 0% 00:12:07.766 Life Percentage Used: 0% 00:12:07.766 Data Units Read: 0 00:12:07.766 Data Units Written: 0 00:12:07.766 Host Read Commands: 0 00:12:07.766 Host Write Commands: 0 00:12:07.766 Controller Busy Time: 0 minutes 00:12:07.766 Power Cycles: 0 00:12:07.766 Power On Hours: 0 hours 00:12:07.766 Unsafe Shutdowns: 0 00:12:07.766 Unrecoverable Media Errors: 0 00:12:07.766 Lifetime Error Log Entries: 0 00:12:07.766 Warning Temperature Time: 0 minutes 00:12:07.766 Critical Temperature Time: 0 minutes 00:12:07.766 00:12:07.766 Number of Queues 00:12:07.766 ================ 00:12:07.766 Number of I/O Submission Queues: 127 00:12:07.766 Number of I/O Completion Queues: 127 00:12:07.766 00:12:07.766 Active Namespaces 00:12:07.766 ================= 00:12:07.766 Namespace 
ID:1 00:12:07.766 Error Recovery Timeout: Unlimited 00:12:07.766 Command Set Identifier: NVM (00h) 00:12:07.766 Deallocate: Supported 00:12:07.766 Deallocated/Unwritten Error: Not Supported 00:12:07.766 Deallocated Read Value: Unknown 00:12:07.766 Deallocate in Write Zeroes: Not Supported 00:12:07.766 Deallocated Guard Field: 0xFFFF 00:12:07.766 Flush: Supported 00:12:07.766 Reservation: Supported 00:12:07.766 Namespace Sharing Capabilities: Multiple Controllers 00:12:07.766 Size (in LBAs): 131072 (0GiB) 00:12:07.766 Capacity (in LBAs): 131072 (0GiB) 00:12:07.767 Utilization (in LBAs): 131072 (0GiB) 00:12:07.767 NGUID: DFB1867683A448AD813EED7A304A7291 00:12:07.767 UUID: dfb18676-83a4-48ad-813e-ed7a304a7291 00:12:07.767 Thin Provisioning: Not Supported 00:12:07.767 Per-NS Atomic Units: Yes 00:12:07.767 Atomic Boundary Size (Normal): 0 00:12:07.767 Atomic Boundary Size (PFail): 0 00:12:07.767 Atomic Boundary Offset: 0 00:12:07.767 Maximum Single Source Range Length: 65535 00:12:07.767 Maximum Copy Length: 65535 00:12:07.767 Maximum Source Range Count: 1 00:12:07.767 NGUID/EUI64 Never Reused: No 00:12:07.767 Namespace Write Protected: No 00:12:07.767 Number of LBA Formats: 1 00:12:07.767 Current LBA Format: LBA Format #00 00:12:07.767 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:07.767 00:12:07.767 01:14:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:07.767 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.767 [2024-05-15 01:14:43.449948] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:13.037 Initializing NVMe Controllers 00:12:13.037 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:13.037 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:13.037 Initialization complete. Launching workers. 00:12:13.037 ======================================================== 00:12:13.037 Latency(us) 00:12:13.037 Device Information : IOPS MiB/s Average min max 00:12:13.037 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39949.29 156.05 3203.67 902.84 6745.88 00:12:13.037 ======================================================== 00:12:13.037 Total : 39949.29 156.05 3203.67 902.84 6745.88 00:12:13.037 00:12:13.037 [2024-05-15 01:14:48.467664] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:13.037 01:14:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:13.037 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.037 [2024-05-15 01:14:48.681692] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:18.393 Initializing NVMe Controllers 00:12:18.393 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:18.393 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:18.393 Initialization complete. Launching workers. 
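The read-workload summary above is the standard spdk_nvme_perf per-core table (IOPS, MiB/s, then average/min/max latency in microseconds). For pulling those rows out of a saved console log offline, a minimal sketch follows; the file name perf.log, the helper name parse_perf_rows, and the parsing approach are illustrative assumptions, not part of the test suite or of SPDK.

#!/usr/bin/env python3
# Illustrative sketch: extract the per-core and "Total" rows from a saved
# spdk_nvme_perf console log. Only the Python standard library is used;
# the default file name below is a hypothetical example.
import re
import sys

ROW = re.compile(r'(from core \d+:|Total\s+:)')

def parse_perf_rows(path):
    rows = []
    with open(path) as fh:
        for line in fh:
            if not ROW.search(line):
                continue
            fields = line.split()
            try:
                # Last five fields of a result row: IOPS, MiB/s,
                # average, min, max (latencies in microseconds).
                iops, mib_s, avg, lo, hi = (float(x) for x in fields[-5:])
            except ValueError:
                continue  # header or wrapped line without the numeric tail
            rows.append({'iops': iops, 'mib_s': mib_s,
                         'avg_us': avg, 'min_us': lo, 'max_us': hi})
    return rows

if __name__ == '__main__':
    for row in parse_perf_rows(sys.argv[1] if len(sys.argv) > 1 else 'perf.log'):
        print(row)

Usage would be against whatever file the console output was tee'd to, e.g. python3 parse_perf.py perf.log.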
00:12:18.393 ======================================================== 00:12:18.393 Latency(us) 00:12:18.393 Device Information : IOPS MiB/s Average min max 00:12:18.393 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16054.55 62.71 7978.20 5981.85 8982.73 00:12:18.393 ======================================================== 00:12:18.393 Total : 16054.55 62.71 7978.20 5981.85 8982.73 00:12:18.393 00:12:18.393 [2024-05-15 01:14:53.725564] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:18.393 01:14:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:18.393 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.393 [2024-05-15 01:14:53.936554] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:23.666 [2024-05-15 01:14:58.995460] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:23.666 Initializing NVMe Controllers 00:12:23.666 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:23.666 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:23.666 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:23.666 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:23.666 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:23.666 Initialization complete. Launching workers. 00:12:23.666 Starting thread on core 2 00:12:23.666 Starting thread on core 3 00:12:23.666 Starting thread on core 1 00:12:23.666 01:14:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:23.666 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.666 [2024-05-15 01:14:59.303594] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:27.860 [2024-05-15 01:15:02.690411] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:27.860 Initializing NVMe Controllers 00:12:27.860 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:27.860 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:27.860 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:27.860 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:27.860 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:27.860 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:27.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:27.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:27.860 Initialization complete. Launching workers. 
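A quick arithmetic cross-check on the two spdk_nvme_perf summaries above: both runs used 4096-byte I/Os (-o 4096), so the reported MiB/s column should equal IOPS * 4096 / 2**20. The numbers below are copied from the logged totals; nothing here is an SPDK API, it is only the unit conversion made explicit.

# Cross-check of the logged perf totals: MiB/s = IOPS * io_size / 2**20.
IO_SIZE = 4096  # bytes, matches the -o 4096 used for both runs

for name, iops, reported_mib_s in [
    ('-w read  run', 39949.29, 156.05),
    ('-w write run', 16054.55, 62.71),
]:
    mib_s = iops * IO_SIZE / 2**20
    print(f'{name}: computed {mib_s:.2f} MiB/s, log reports {reported_mib_s}')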
00:12:27.860 Starting thread on core 1 with urgent priority queue 00:12:27.860 Starting thread on core 2 with urgent priority queue 00:12:27.860 Starting thread on core 3 with urgent priority queue 00:12:27.860 Starting thread on core 0 with urgent priority queue 00:12:27.860 SPDK bdev Controller (SPDK1 ) core 0: 8000.00 IO/s 12.50 secs/100000 ios 00:12:27.860 SPDK bdev Controller (SPDK1 ) core 1: 7203.33 IO/s 13.88 secs/100000 ios 00:12:27.860 SPDK bdev Controller (SPDK1 ) core 2: 7876.33 IO/s 12.70 secs/100000 ios 00:12:27.860 SPDK bdev Controller (SPDK1 ) core 3: 8831.00 IO/s 11.32 secs/100000 ios 00:12:27.860 ======================================================== 00:12:27.860 00:12:27.860 01:15:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:27.860 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.860 [2024-05-15 01:15:02.977649] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:27.860 Initializing NVMe Controllers 00:12:27.860 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:27.860 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:27.860 Namespace ID: 1 size: 0GB 00:12:27.860 Initialization complete. 00:12:27.860 INFO: using host memory buffer for IO 00:12:27.860 Hello world! 00:12:27.860 [2024-05-15 01:15:03.012017] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:27.860 01:15:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:27.860 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.860 [2024-05-15 01:15:03.290481] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:28.793 Initializing NVMe Controllers 00:12:28.793 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:28.793 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:28.793 Initialization complete. Launching workers. 
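The arbitration summary above reports each core twice, as IO/s and as secs/100000 ios; the second figure is simply 100000 divided by the first. A short check against the logged per-core rows (plain arithmetic on values copied from that table, shown only to make the units explicit):

# The "secs/100000 ios" column is 100000 / (IO/s) for each core.
rows = [
    ('core 0', 8000.00, 12.50),
    ('core 1', 7203.33, 13.88),
    ('core 2', 7876.33, 12.70),
    ('core 3', 8831.00, 11.32),
]
for core, io_per_sec, reported_secs in rows:
    secs = 100000 / io_per_sec
    print(f'{core}: {secs:.2f} s per 100000 ios (log reports {reported_secs})')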
00:12:28.793 submit (in ns) avg, min, max = 6397.5, 3091.2, 3999679.2 00:12:28.793 complete (in ns) avg, min, max = 20091.6, 1721.6, 3998776.0 00:12:28.793 00:12:28.793 Submit histogram 00:12:28.793 ================ 00:12:28.793 Range in us Cumulative Count 00:12:28.793 3.085 - 3.098: 0.0239% ( 4) 00:12:28.793 3.098 - 3.110: 0.2983% ( 46) 00:12:28.793 3.110 - 3.123: 1.0202% ( 121) 00:12:28.793 3.123 - 3.136: 2.4700% ( 243) 00:12:28.793 3.136 - 3.149: 5.4054% ( 492) 00:12:28.793 3.149 - 3.162: 8.6749% ( 548) 00:12:28.793 3.162 - 3.174: 12.7737% ( 687) 00:12:28.793 3.174 - 3.187: 17.9106% ( 861) 00:12:28.793 3.187 - 3.200: 23.3817% ( 917) 00:12:28.793 3.200 - 3.213: 28.9959% ( 941) 00:12:28.793 3.213 - 3.226: 35.0755% ( 1019) 00:12:28.793 3.226 - 3.238: 42.0679% ( 1172) 00:12:28.793 3.238 - 3.251: 48.1952% ( 1027) 00:12:28.793 3.251 - 3.264: 52.5267% ( 726) 00:12:28.793 3.264 - 3.277: 56.3570% ( 642) 00:12:28.793 3.277 - 3.302: 64.3219% ( 1335) 00:12:28.793 3.302 - 3.328: 71.0757% ( 1132) 00:12:28.793 3.328 - 3.354: 76.8809% ( 973) 00:12:28.793 3.354 - 3.379: 84.7921% ( 1326) 00:12:28.793 3.379 - 3.405: 87.2919% ( 419) 00:12:28.793 3.405 - 3.430: 88.1988% ( 152) 00:12:28.793 3.430 - 3.456: 88.9684% ( 129) 00:12:28.793 3.456 - 3.482: 90.0722% ( 185) 00:12:28.793 3.482 - 3.507: 91.4146% ( 225) 00:12:28.793 3.507 - 3.533: 93.3059% ( 317) 00:12:28.793 3.533 - 3.558: 94.9168% ( 270) 00:12:28.793 3.558 - 3.584: 96.1816% ( 212) 00:12:28.793 3.584 - 3.610: 97.3391% ( 194) 00:12:28.793 3.610 - 3.635: 98.3832% ( 175) 00:12:28.793 3.635 - 3.661: 98.8783% ( 83) 00:12:28.793 3.661 - 3.686: 99.1349% ( 43) 00:12:28.793 3.686 - 3.712: 99.3676% ( 39) 00:12:28.793 3.712 - 3.738: 99.5704% ( 34) 00:12:28.793 3.738 - 3.763: 99.6062% ( 6) 00:12:28.793 3.763 - 3.789: 99.6301% ( 4) 00:12:28.793 3.789 - 3.814: 99.6361% ( 1) 00:12:28.793 3.814 - 3.840: 99.6420% ( 1) 00:12:28.793 4.173 - 4.198: 99.6480% ( 1) 00:12:28.793 4.531 - 4.557: 99.6540% ( 1) 00:12:28.793 5.274 - 5.299: 99.6599% ( 1) 00:12:28.793 5.555 - 5.581: 99.6659% ( 1) 00:12:28.793 5.837 - 5.862: 99.6719% ( 1) 00:12:28.793 6.093 - 6.118: 99.6778% ( 1) 00:12:28.793 6.144 - 6.170: 99.6838% ( 1) 00:12:28.793 6.170 - 6.195: 99.6898% ( 1) 00:12:28.793 6.195 - 6.221: 99.6957% ( 1) 00:12:28.793 6.323 - 6.349: 99.7017% ( 1) 00:12:28.793 6.477 - 6.502: 99.7077% ( 1) 00:12:28.793 6.502 - 6.528: 99.7136% ( 1) 00:12:28.793 6.554 - 6.605: 99.7196% ( 1) 00:12:28.793 6.707 - 6.758: 99.7315% ( 2) 00:12:28.793 6.758 - 6.810: 99.7435% ( 2) 00:12:28.793 6.810 - 6.861: 99.7494% ( 1) 00:12:28.793 6.861 - 6.912: 99.7554% ( 1) 00:12:28.793 6.912 - 6.963: 99.7614% ( 1) 00:12:28.793 6.963 - 7.014: 99.7673% ( 1) 00:12:28.793 7.014 - 7.066: 99.7792% ( 2) 00:12:28.793 7.117 - 7.168: 99.7852% ( 1) 00:12:28.793 7.168 - 7.219: 99.7971% ( 2) 00:12:28.793 7.322 - 7.373: 99.8031% ( 1) 00:12:28.793 7.373 - 7.424: 99.8150% ( 2) 00:12:28.793 7.424 - 7.475: 99.8210% ( 1) 00:12:28.793 7.475 - 7.526: 99.8329% ( 2) 00:12:28.793 7.526 - 7.578: 99.8389% ( 1) 00:12:28.793 7.629 - 7.680: 99.8449% ( 1) 00:12:28.793 7.680 - 7.731: 99.8628% ( 3) 00:12:28.793 7.731 - 7.782: 99.8687% ( 1) 00:12:28.793 7.782 - 7.834: 99.8747% ( 1) 00:12:28.793 7.834 - 7.885: 99.8807% ( 1) 00:12:28.793 7.987 - 8.038: 99.8866% ( 1) 00:12:28.793 8.038 - 8.090: 99.8926% ( 1) 00:12:28.793 8.602 - 8.653: 99.8986% ( 1) 00:12:28.793 9.062 - 9.114: 99.9045% ( 1) 00:12:28.793 10.803 - 10.854: 99.9105% ( 1) 00:12:28.793 15.462 - 15.565: 99.9165% ( 1) 00:12:28.793 15.974 - 16.077: 99.9224% ( 1) 00:12:28.793 3984.589 - 4010.803: 
100.0000% ( 13) 00:12:28.793 00:12:28.793 Complete histogram 00:12:28.793 ================== 00:12:28.793 Range in us Cumulative Count 00:12:28.793 1.715 - [2024-05-15 01:15:04.312571] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:28.793 1.728: 0.1850% ( 31) 00:12:28.793 1.728 - 1.741: 11.3657% ( 1874) 00:12:28.793 1.741 - 1.754: 34.8010% ( 3928) 00:12:28.793 1.754 - 1.766: 40.4689% ( 950) 00:12:28.793 1.766 - 1.779: 43.3924% ( 490) 00:12:28.793 1.779 - 1.792: 50.3013% ( 1158) 00:12:28.793 1.792 - 1.805: 78.7244% ( 4764) 00:12:28.793 1.805 - 1.818: 91.2535% ( 2100) 00:12:28.793 1.818 - 1.830: 95.1375% ( 651) 00:12:28.793 1.830 - 1.843: 97.2734% ( 358) 00:12:28.793 1.843 - 1.856: 97.8044% ( 89) 00:12:28.793 1.856 - 1.869: 98.3772% ( 96) 00:12:28.793 1.869 - 1.882: 98.9738% ( 100) 00:12:28.793 1.882 - 1.894: 99.2542% ( 47) 00:12:28.793 1.894 - 1.907: 99.3258% ( 12) 00:12:28.793 1.907 - 1.920: 99.3676% ( 7) 00:12:28.793 1.920 - 1.933: 99.3855% ( 3) 00:12:28.793 1.933 - 1.946: 99.3914% ( 1) 00:12:28.793 1.958 - 1.971: 99.3974% ( 1) 00:12:28.793 1.997 - 2.010: 99.4034% ( 1) 00:12:28.793 2.010 - 2.022: 99.4093% ( 1) 00:12:28.793 4.454 - 4.480: 99.4153% ( 1) 00:12:28.793 4.582 - 4.608: 99.4213% ( 1) 00:12:28.793 4.710 - 4.736: 99.4272% ( 1) 00:12:28.793 5.120 - 5.146: 99.4332% ( 1) 00:12:28.793 5.299 - 5.325: 99.4392% ( 1) 00:12:28.793 5.325 - 5.350: 99.4451% ( 1) 00:12:28.793 5.402 - 5.427: 99.4511% ( 1) 00:12:28.793 5.427 - 5.453: 99.4571% ( 1) 00:12:28.793 5.530 - 5.555: 99.4630% ( 1) 00:12:28.793 5.581 - 5.606: 99.4690% ( 1) 00:12:28.793 5.658 - 5.683: 99.4750% ( 1) 00:12:28.793 5.786 - 5.811: 99.4869% ( 2) 00:12:28.793 6.093 - 6.118: 99.4929% ( 1) 00:12:28.793 6.195 - 6.221: 99.4988% ( 1) 00:12:28.793 6.298 - 6.323: 99.5048% ( 1) 00:12:28.793 6.451 - 6.477: 99.5108% ( 1) 00:12:28.793 6.912 - 6.963: 99.5167% ( 1) 00:12:28.793 7.322 - 7.373: 99.5227% ( 1) 00:12:28.793 11.571 - 11.622: 99.5287% ( 1) 00:12:28.793 14.746 - 14.848: 99.5346% ( 1) 00:12:28.793 16.282 - 16.384: 99.5406% ( 1) 00:12:28.793 3158.835 - 3171.942: 99.5466% ( 1) 00:12:28.793 3984.589 - 4010.803: 100.0000% ( 76) 00:12:28.793 00:12:28.793 01:15:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:28.793 01:15:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:28.793 01:15:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:28.793 01:15:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:28.793 01:15:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:29.051 [ 00:12:29.051 { 00:12:29.051 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:29.051 "subtype": "Discovery", 00:12:29.051 "listen_addresses": [], 00:12:29.051 "allow_any_host": true, 00:12:29.051 "hosts": [] 00:12:29.051 }, 00:12:29.051 { 00:12:29.051 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:29.051 "subtype": "NVMe", 00:12:29.051 "listen_addresses": [ 00:12:29.051 { 00:12:29.051 "trtype": "VFIOUSER", 00:12:29.051 "adrfam": "IPv4", 00:12:29.051 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:29.051 "trsvcid": "0" 00:12:29.051 } 00:12:29.051 ], 00:12:29.051 "allow_any_host": true, 00:12:29.051 "hosts": [], 00:12:29.051 
"serial_number": "SPDK1", 00:12:29.051 "model_number": "SPDK bdev Controller", 00:12:29.051 "max_namespaces": 32, 00:12:29.051 "min_cntlid": 1, 00:12:29.051 "max_cntlid": 65519, 00:12:29.051 "namespaces": [ 00:12:29.051 { 00:12:29.051 "nsid": 1, 00:12:29.051 "bdev_name": "Malloc1", 00:12:29.051 "name": "Malloc1", 00:12:29.051 "nguid": "DFB1867683A448AD813EED7A304A7291", 00:12:29.051 "uuid": "dfb18676-83a4-48ad-813e-ed7a304a7291" 00:12:29.051 } 00:12:29.051 ] 00:12:29.051 }, 00:12:29.051 { 00:12:29.051 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:29.051 "subtype": "NVMe", 00:12:29.051 "listen_addresses": [ 00:12:29.051 { 00:12:29.051 "trtype": "VFIOUSER", 00:12:29.051 "adrfam": "IPv4", 00:12:29.051 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:29.051 "trsvcid": "0" 00:12:29.051 } 00:12:29.051 ], 00:12:29.051 "allow_any_host": true, 00:12:29.051 "hosts": [], 00:12:29.051 "serial_number": "SPDK2", 00:12:29.051 "model_number": "SPDK bdev Controller", 00:12:29.051 "max_namespaces": 32, 00:12:29.051 "min_cntlid": 1, 00:12:29.051 "max_cntlid": 65519, 00:12:29.051 "namespaces": [ 00:12:29.051 { 00:12:29.051 "nsid": 1, 00:12:29.051 "bdev_name": "Malloc2", 00:12:29.051 "name": "Malloc2", 00:12:29.051 "nguid": "B87762E146734EB285F4D1D2AE197074", 00:12:29.051 "uuid": "b87762e1-4673-4eb2-85f4-d1d2ae197074" 00:12:29.051 } 00:12:29.051 ] 00:12:29.051 } 00:12:29.051 ] 00:12:29.051 01:15:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:29.051 01:15:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4028570 00:12:29.051 01:15:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:29.051 01:15:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:29.051 01:15:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:12:29.051 01:15:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:29.051 01:15:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:29.051 01:15:04 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:12:29.051 01:15:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:29.051 01:15:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:29.051 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.051 [2024-05-15 01:15:04.708034] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:29.051 Malloc3 00:12:29.310 01:15:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:29.310 [2024-05-15 01:15:04.896370] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:29.310 01:15:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:29.310 Asynchronous Event Request test 00:12:29.310 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:29.310 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:29.310 Registering asynchronous event callbacks... 00:12:29.310 Starting namespace attribute notice tests for all controllers... 00:12:29.310 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:29.310 aer_cb - Changed Namespace 00:12:29.310 Cleaning up... 00:12:29.570 [ 00:12:29.570 { 00:12:29.570 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:29.570 "subtype": "Discovery", 00:12:29.570 "listen_addresses": [], 00:12:29.570 "allow_any_host": true, 00:12:29.570 "hosts": [] 00:12:29.570 }, 00:12:29.570 { 00:12:29.570 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:29.570 "subtype": "NVMe", 00:12:29.570 "listen_addresses": [ 00:12:29.570 { 00:12:29.570 "trtype": "VFIOUSER", 00:12:29.570 "adrfam": "IPv4", 00:12:29.570 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:29.570 "trsvcid": "0" 00:12:29.570 } 00:12:29.570 ], 00:12:29.570 "allow_any_host": true, 00:12:29.570 "hosts": [], 00:12:29.570 "serial_number": "SPDK1", 00:12:29.570 "model_number": "SPDK bdev Controller", 00:12:29.570 "max_namespaces": 32, 00:12:29.570 "min_cntlid": 1, 00:12:29.570 "max_cntlid": 65519, 00:12:29.570 "namespaces": [ 00:12:29.570 { 00:12:29.570 "nsid": 1, 00:12:29.570 "bdev_name": "Malloc1", 00:12:29.570 "name": "Malloc1", 00:12:29.570 "nguid": "DFB1867683A448AD813EED7A304A7291", 00:12:29.570 "uuid": "dfb18676-83a4-48ad-813e-ed7a304a7291" 00:12:29.570 }, 00:12:29.570 { 00:12:29.570 "nsid": 2, 00:12:29.570 "bdev_name": "Malloc3", 00:12:29.570 "name": "Malloc3", 00:12:29.570 "nguid": "0B1FB765F2CC4BD9A92C221BC0616A85", 00:12:29.570 "uuid": "0b1fb765-f2cc-4bd9-a92c-221bc0616a85" 00:12:29.570 } 00:12:29.570 ] 00:12:29.570 }, 00:12:29.570 { 00:12:29.570 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:29.570 "subtype": "NVMe", 00:12:29.570 "listen_addresses": [ 00:12:29.570 { 00:12:29.570 "trtype": "VFIOUSER", 00:12:29.570 "adrfam": "IPv4", 00:12:29.570 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:29.570 "trsvcid": "0" 00:12:29.570 } 00:12:29.570 ], 00:12:29.570 "allow_any_host": true, 00:12:29.570 "hosts": [], 00:12:29.570 "serial_number": "SPDK2", 00:12:29.570 "model_number": "SPDK bdev Controller", 00:12:29.570 
"max_namespaces": 32, 00:12:29.570 "min_cntlid": 1, 00:12:29.570 "max_cntlid": 65519, 00:12:29.570 "namespaces": [ 00:12:29.570 { 00:12:29.570 "nsid": 1, 00:12:29.570 "bdev_name": "Malloc2", 00:12:29.570 "name": "Malloc2", 00:12:29.570 "nguid": "B87762E146734EB285F4D1D2AE197074", 00:12:29.570 "uuid": "b87762e1-4673-4eb2-85f4-d1d2ae197074" 00:12:29.570 } 00:12:29.570 ] 00:12:29.570 } 00:12:29.570 ] 00:12:29.570 01:15:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4028570 00:12:29.570 01:15:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:29.570 01:15:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:29.570 01:15:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:29.570 01:15:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:29.570 [2024-05-15 01:15:05.138777] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:12:29.570 [2024-05-15 01:15:05.138816] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4028719 ] 00:12:29.570 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.570 [2024-05-15 01:15:05.170422] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:29.570 [2024-05-15 01:15:05.180486] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:29.570 [2024-05-15 01:15:05.180508] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8e31bfe000 00:12:29.570 [2024-05-15 01:15:05.181487] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.570 [2024-05-15 01:15:05.182488] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.570 [2024-05-15 01:15:05.183501] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.570 [2024-05-15 01:15:05.184512] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:29.570 [2024-05-15 01:15:05.185518] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:29.570 [2024-05-15 01:15:05.186523] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.570 [2024-05-15 01:15:05.187539] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:29.570 [2024-05-15 01:15:05.188549] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.570 [2024-05-15 01:15:05.189556] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:29.570 [2024-05-15 01:15:05.189571] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8e31bf3000 00:12:29.570 [2024-05-15 01:15:05.190462] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:29.570 [2024-05-15 01:15:05.202664] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:29.570 [2024-05-15 01:15:05.202688] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:29.570 [2024-05-15 01:15:05.204753] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:29.570 [2024-05-15 01:15:05.204790] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:29.570 [2024-05-15 01:15:05.204860] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:29.570 [2024-05-15 01:15:05.204877] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:29.570 [2024-05-15 01:15:05.204883] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:29.570 [2024-05-15 01:15:05.205758] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:29.570 [2024-05-15 01:15:05.205769] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:29.570 [2024-05-15 01:15:05.205778] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:29.570 [2024-05-15 01:15:05.206759] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:29.570 [2024-05-15 01:15:05.206769] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:29.570 [2024-05-15 01:15:05.206778] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:29.570 [2024-05-15 01:15:05.207767] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:29.570 [2024-05-15 01:15:05.207777] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:29.570 [2024-05-15 01:15:05.208773] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:29.570 [2024-05-15 01:15:05.208783] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:29.571 [2024-05-15 01:15:05.208789] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:29.571 [2024-05-15 01:15:05.208798] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:29.571 [2024-05-15 01:15:05.208905] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:29.571 [2024-05-15 01:15:05.208911] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:29.571 [2024-05-15 01:15:05.208917] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:29.571 [2024-05-15 01:15:05.209780] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:29.571 [2024-05-15 01:15:05.210782] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:29.571 [2024-05-15 01:15:05.211791] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:29.571 [2024-05-15 01:15:05.212788] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:29.571 [2024-05-15 01:15:05.212830] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:29.571 [2024-05-15 01:15:05.213804] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:29.571 [2024-05-15 01:15:05.213815] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:29.571 [2024-05-15 01:15:05.213821] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:29.571 [2024-05-15 01:15:05.213840] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:29.571 [2024-05-15 01:15:05.213849] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:29.571 [2024-05-15 01:15:05.213864] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:29.571 [2024-05-15 01:15:05.213871] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:29.571 [2024-05-15 01:15:05.213884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:29.571 [2024-05-15 01:15:05.220199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:29.571 [2024-05-15 01:15:05.220212] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:29.571 [2024-05-15 01:15:05.220218] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:29.571 [2024-05-15 01:15:05.220224] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:29.571 [2024-05-15 01:15:05.220230] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:29.571 [2024-05-15 01:15:05.220236] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:29.571 [2024-05-15 01:15:05.220243] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:29.571 [2024-05-15 01:15:05.220249] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:29.571 [2024-05-15 01:15:05.220261] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:29.571 [2024-05-15 01:15:05.220273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:29.571 [2024-05-15 01:15:05.226199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:29.571 [2024-05-15 01:15:05.226217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.571 [2024-05-15 01:15:05.226226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.571 [2024-05-15 01:15:05.226235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.571 [2024-05-15 01:15:05.226245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.571 [2024-05-15 01:15:05.226254] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:29.571 [2024-05-15 01:15:05.226263] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:29.571 [2024-05-15 01:15:05.226272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:29.571 [2024-05-15 01:15:05.236198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:29.571 [2024-05-15 01:15:05.236208] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:29.571 [2024-05-15 01:15:05.236217] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:29.571 [2024-05-15 01:15:05.236226] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:29.571 [2024-05-15 01:15:05.236233] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:29.571 [2024-05-15 01:15:05.236243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:29.571 [2024-05-15 01:15:05.244198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:29.571 [2024-05-15 01:15:05.244245] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:29.571 [2024-05-15 01:15:05.244255] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:29.571 [2024-05-15 01:15:05.244264] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:29.571 [2024-05-15 01:15:05.244271] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:29.571 [2024-05-15 01:15:05.244278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:29.571 [2024-05-15 01:15:05.252199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:29.571 [2024-05-15 01:15:05.252216] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:29.571 [2024-05-15 01:15:05.252229] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:29.571 [2024-05-15 01:15:05.252238] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:29.571 [2024-05-15 01:15:05.252246] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:29.571 [2024-05-15 01:15:05.252253] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:29.571 [2024-05-15 01:15:05.252260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:29.571 [2024-05-15 01:15:05.260198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:29.571 [2024-05-15 01:15:05.260212] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:29.571 [2024-05-15 01:15:05.260221] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:29.571 [2024-05-15 01:15:05.260234] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:29.571 [2024-05-15 01:15:05.260240] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:29.571 [2024-05-15 01:15:05.260247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:29.833 [2024-05-15 01:15:05.268199] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:29.833 [2024-05-15 01:15:05.268214] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:29.833 [2024-05-15 01:15:05.268223] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:29.833 [2024-05-15 01:15:05.268231] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:29.833 [2024-05-15 01:15:05.268239] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:29.833 [2024-05-15 01:15:05.268246] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:29.833 [2024-05-15 01:15:05.268252] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:29.833 [2024-05-15 01:15:05.268259] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:29.833 [2024-05-15 01:15:05.268266] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:29.833 [2024-05-15 01:15:05.268285] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:29.833 [2024-05-15 01:15:05.275201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:29.833 [2024-05-15 01:15:05.275217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:29.833 [2024-05-15 01:15:05.283198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:29.833 [2024-05-15 01:15:05.283214] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:29.833 [2024-05-15 01:15:05.291200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:29.833 [2024-05-15 01:15:05.291217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:29.833 [2024-05-15 01:15:05.295200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:29.833 [2024-05-15 01:15:05.295215] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:29.833 [2024-05-15 01:15:05.295221] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:29.833 [2024-05-15 01:15:05.295226] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:29.833 [2024-05-15 01:15:05.295231] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:29.833 [2024-05-15 01:15:05.295238] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:29.833 [2024-05-15 01:15:05.295246] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:29.833 [2024-05-15 01:15:05.295252] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:29.833 [2024-05-15 01:15:05.295261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:29.833 [2024-05-15 01:15:05.295269] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:29.833 [2024-05-15 01:15:05.295275] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:29.833 [2024-05-15 01:15:05.295282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:29.833 [2024-05-15 01:15:05.295293] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:29.833 [2024-05-15 01:15:05.295299] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:29.833 [2024-05-15 01:15:05.295306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:29.833 [2024-05-15 01:15:05.306199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:29.833 [2024-05-15 01:15:05.306216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:29.833 [2024-05-15 01:15:05.306227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:29.833 [2024-05-15 01:15:05.306238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:29.833 ===================================================== 00:12:29.833 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:29.833 ===================================================== 00:12:29.833 Controller Capabilities/Features 00:12:29.833 ================================ 00:12:29.833 Vendor ID: 4e58 00:12:29.833 Subsystem Vendor ID: 4e58 00:12:29.833 Serial Number: SPDK2 00:12:29.833 Model Number: SPDK bdev Controller 00:12:29.833 Firmware Version: 24.05 00:12:29.833 Recommended Arb Burst: 6 00:12:29.833 IEEE OUI Identifier: 8d 6b 50 00:12:29.833 Multi-path I/O 00:12:29.833 May have multiple subsystem ports: Yes 00:12:29.833 May have multiple controllers: Yes 00:12:29.833 Associated with SR-IOV VF: No 00:12:29.833 Max Data Transfer Size: 131072 00:12:29.833 Max Number of Namespaces: 32 00:12:29.833 Max Number of I/O Queues: 127 00:12:29.833 NVMe Specification Version (VS): 1.3 00:12:29.833 NVMe Specification Version (Identify): 1.3 00:12:29.833 Maximum Queue Entries: 256 00:12:29.833 Contiguous Queues Required: Yes 00:12:29.833 Arbitration Mechanisms Supported 00:12:29.833 Weighted Round Robin: Not Supported 00:12:29.833 Vendor Specific: Not Supported 00:12:29.833 Reset Timeout: 15000 ms 00:12:29.833 Doorbell Stride: 4 bytes 
00:12:29.833 NVM Subsystem Reset: Not Supported 00:12:29.833 Command Sets Supported 00:12:29.833 NVM Command Set: Supported 00:12:29.833 Boot Partition: Not Supported 00:12:29.833 Memory Page Size Minimum: 4096 bytes 00:12:29.833 Memory Page Size Maximum: 4096 bytes 00:12:29.833 Persistent Memory Region: Not Supported 00:12:29.833 Optional Asynchronous Events Supported 00:12:29.833 Namespace Attribute Notices: Supported 00:12:29.833 Firmware Activation Notices: Not Supported 00:12:29.833 ANA Change Notices: Not Supported 00:12:29.833 PLE Aggregate Log Change Notices: Not Supported 00:12:29.833 LBA Status Info Alert Notices: Not Supported 00:12:29.833 EGE Aggregate Log Change Notices: Not Supported 00:12:29.833 Normal NVM Subsystem Shutdown event: Not Supported 00:12:29.833 Zone Descriptor Change Notices: Not Supported 00:12:29.833 Discovery Log Change Notices: Not Supported 00:12:29.833 Controller Attributes 00:12:29.833 128-bit Host Identifier: Supported 00:12:29.833 Non-Operational Permissive Mode: Not Supported 00:12:29.833 NVM Sets: Not Supported 00:12:29.833 Read Recovery Levels: Not Supported 00:12:29.833 Endurance Groups: Not Supported 00:12:29.833 Predictable Latency Mode: Not Supported 00:12:29.833 Traffic Based Keep ALive: Not Supported 00:12:29.833 Namespace Granularity: Not Supported 00:12:29.833 SQ Associations: Not Supported 00:12:29.833 UUID List: Not Supported 00:12:29.833 Multi-Domain Subsystem: Not Supported 00:12:29.833 Fixed Capacity Management: Not Supported 00:12:29.833 Variable Capacity Management: Not Supported 00:12:29.833 Delete Endurance Group: Not Supported 00:12:29.833 Delete NVM Set: Not Supported 00:12:29.833 Extended LBA Formats Supported: Not Supported 00:12:29.833 Flexible Data Placement Supported: Not Supported 00:12:29.833 00:12:29.833 Controller Memory Buffer Support 00:12:29.833 ================================ 00:12:29.833 Supported: No 00:12:29.833 00:12:29.833 Persistent Memory Region Support 00:12:29.833 ================================ 00:12:29.833 Supported: No 00:12:29.834 00:12:29.834 Admin Command Set Attributes 00:12:29.834 ============================ 00:12:29.834 Security Send/Receive: Not Supported 00:12:29.834 Format NVM: Not Supported 00:12:29.834 Firmware Activate/Download: Not Supported 00:12:29.834 Namespace Management: Not Supported 00:12:29.834 Device Self-Test: Not Supported 00:12:29.834 Directives: Not Supported 00:12:29.834 NVMe-MI: Not Supported 00:12:29.834 Virtualization Management: Not Supported 00:12:29.834 Doorbell Buffer Config: Not Supported 00:12:29.834 Get LBA Status Capability: Not Supported 00:12:29.834 Command & Feature Lockdown Capability: Not Supported 00:12:29.834 Abort Command Limit: 4 00:12:29.834 Async Event Request Limit: 4 00:12:29.834 Number of Firmware Slots: N/A 00:12:29.834 Firmware Slot 1 Read-Only: N/A 00:12:29.834 Firmware Activation Without Reset: N/A 00:12:29.834 Multiple Update Detection Support: N/A 00:12:29.834 Firmware Update Granularity: No Information Provided 00:12:29.834 Per-Namespace SMART Log: No 00:12:29.834 Asymmetric Namespace Access Log Page: Not Supported 00:12:29.834 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:29.834 Command Effects Log Page: Supported 00:12:29.834 Get Log Page Extended Data: Supported 00:12:29.834 Telemetry Log Pages: Not Supported 00:12:29.834 Persistent Event Log Pages: Not Supported 00:12:29.834 Supported Log Pages Log Page: May Support 00:12:29.834 Commands Supported & Effects Log Page: Not Supported 00:12:29.834 Feature Identifiers & Effects Log Page:May 
Support 00:12:29.834 NVMe-MI Commands & Effects Log Page: May Support 00:12:29.834 Data Area 4 for Telemetry Log: Not Supported 00:12:29.834 Error Log Page Entries Supported: 128 00:12:29.834 Keep Alive: Supported 00:12:29.834 Keep Alive Granularity: 10000 ms 00:12:29.834 00:12:29.834 NVM Command Set Attributes 00:12:29.834 ========================== 00:12:29.834 Submission Queue Entry Size 00:12:29.834 Max: 64 00:12:29.834 Min: 64 00:12:29.834 Completion Queue Entry Size 00:12:29.834 Max: 16 00:12:29.834 Min: 16 00:12:29.834 Number of Namespaces: 32 00:12:29.834 Compare Command: Supported 00:12:29.834 Write Uncorrectable Command: Not Supported 00:12:29.834 Dataset Management Command: Supported 00:12:29.834 Write Zeroes Command: Supported 00:12:29.834 Set Features Save Field: Not Supported 00:12:29.834 Reservations: Not Supported 00:12:29.834 Timestamp: Not Supported 00:12:29.834 Copy: Supported 00:12:29.834 Volatile Write Cache: Present 00:12:29.834 Atomic Write Unit (Normal): 1 00:12:29.834 Atomic Write Unit (PFail): 1 00:12:29.834 Atomic Compare & Write Unit: 1 00:12:29.834 Fused Compare & Write: Supported 00:12:29.834 Scatter-Gather List 00:12:29.834 SGL Command Set: Supported (Dword aligned) 00:12:29.834 SGL Keyed: Not Supported 00:12:29.834 SGL Bit Bucket Descriptor: Not Supported 00:12:29.834 SGL Metadata Pointer: Not Supported 00:12:29.834 Oversized SGL: Not Supported 00:12:29.834 SGL Metadata Address: Not Supported 00:12:29.834 SGL Offset: Not Supported 00:12:29.834 Transport SGL Data Block: Not Supported 00:12:29.834 Replay Protected Memory Block: Not Supported 00:12:29.834 00:12:29.834 Firmware Slot Information 00:12:29.834 ========================= 00:12:29.834 Active slot: 1 00:12:29.834 Slot 1 Firmware Revision: 24.05 00:12:29.834 00:12:29.834 00:12:29.834 Commands Supported and Effects 00:12:29.834 ============================== 00:12:29.834 Admin Commands 00:12:29.834 -------------- 00:12:29.834 Get Log Page (02h): Supported 00:12:29.834 Identify (06h): Supported 00:12:29.834 Abort (08h): Supported 00:12:29.834 Set Features (09h): Supported 00:12:29.834 Get Features (0Ah): Supported 00:12:29.834 Asynchronous Event Request (0Ch): Supported 00:12:29.834 Keep Alive (18h): Supported 00:12:29.834 I/O Commands 00:12:29.834 ------------ 00:12:29.834 Flush (00h): Supported LBA-Change 00:12:29.834 Write (01h): Supported LBA-Change 00:12:29.834 Read (02h): Supported 00:12:29.834 Compare (05h): Supported 00:12:29.834 Write Zeroes (08h): Supported LBA-Change 00:12:29.834 Dataset Management (09h): Supported LBA-Change 00:12:29.834 Copy (19h): Supported LBA-Change 00:12:29.834 Unknown (79h): Supported LBA-Change 00:12:29.834 Unknown (7Ah): Supported 00:12:29.834 00:12:29.834 Error Log 00:12:29.834 ========= 00:12:29.834 00:12:29.834 Arbitration 00:12:29.834 =========== 00:12:29.834 Arbitration Burst: 1 00:12:29.834 00:12:29.834 Power Management 00:12:29.834 ================ 00:12:29.834 Number of Power States: 1 00:12:29.834 Current Power State: Power State #0 00:12:29.834 Power State #0: 00:12:29.834 Max Power: 0.00 W 00:12:29.834 Non-Operational State: Operational 00:12:29.834 Entry Latency: Not Reported 00:12:29.834 Exit Latency: Not Reported 00:12:29.834 Relative Read Throughput: 0 00:12:29.834 Relative Read Latency: 0 00:12:29.834 Relative Write Throughput: 0 00:12:29.834 Relative Write Latency: 0 00:12:29.834 Idle Power: Not Reported 00:12:29.834 Active Power: Not Reported 00:12:29.834 Non-Operational Permissive Mode: Not Supported 00:12:29.834 00:12:29.834 Health Information 
00:12:29.834 ================== 00:12:29.834 Critical Warnings: 00:12:29.834 Available Spare Space: OK 00:12:29.834 Temperature: OK 00:12:29.834 Device Reliability: OK 00:12:29.834 Read Only: No 00:12:29.834 Volatile Memory Backup: OK 00:12:29.834 Current Temperature: 0 Kelvin (-2[2024-05-15 01:15:05.306330] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:29.834 [2024-05-15 01:15:05.314200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:29.834 [2024-05-15 01:15:05.314229] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:29.834 [2024-05-15 01:15:05.314239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.834 [2024-05-15 01:15:05.314247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.834 [2024-05-15 01:15:05.314255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.834 [2024-05-15 01:15:05.314263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.834 [2024-05-15 01:15:05.314314] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:29.834 [2024-05-15 01:15:05.314327] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:29.834 [2024-05-15 01:15:05.315318] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:29.834 [2024-05-15 01:15:05.315364] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:29.834 [2024-05-15 01:15:05.315373] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:29.834 [2024-05-15 01:15:05.316325] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:29.834 [2024-05-15 01:15:05.316339] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:29.834 [2024-05-15 01:15:05.316386] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:29.834 [2024-05-15 01:15:05.319198] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:29.834 73 Celsius) 00:12:29.834 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:29.834 Available Spare: 0% 00:12:29.834 Available Spare Threshold: 0% 00:12:29.834 Life Percentage Used: 0% 00:12:29.834 Data Units Read: 0 00:12:29.834 Data Units Written: 0 00:12:29.834 Host Read Commands: 0 00:12:29.834 Host Write Commands: 0 00:12:29.834 Controller Busy Time: 0 minutes 00:12:29.834 Power Cycles: 0 00:12:29.834 Power On Hours: 0 hours 00:12:29.834 Unsafe Shutdowns: 0 00:12:29.834 Unrecoverable Media Errors: 0 00:12:29.834 Lifetime Error Log Entries: 0 00:12:29.834 Warning Temperature Time: 0 
minutes 00:12:29.834 Critical Temperature Time: 0 minutes 00:12:29.834 00:12:29.834 Number of Queues 00:12:29.834 ================ 00:12:29.834 Number of I/O Submission Queues: 127 00:12:29.834 Number of I/O Completion Queues: 127 00:12:29.834 00:12:29.834 Active Namespaces 00:12:29.834 ================= 00:12:29.834 Namespace ID:1 00:12:29.834 Error Recovery Timeout: Unlimited 00:12:29.834 Command Set Identifier: NVM (00h) 00:12:29.834 Deallocate: Supported 00:12:29.834 Deallocated/Unwritten Error: Not Supported 00:12:29.834 Deallocated Read Value: Unknown 00:12:29.834 Deallocate in Write Zeroes: Not Supported 00:12:29.834 Deallocated Guard Field: 0xFFFF 00:12:29.834 Flush: Supported 00:12:29.834 Reservation: Supported 00:12:29.834 Namespace Sharing Capabilities: Multiple Controllers 00:12:29.834 Size (in LBAs): 131072 (0GiB) 00:12:29.834 Capacity (in LBAs): 131072 (0GiB) 00:12:29.834 Utilization (in LBAs): 131072 (0GiB) 00:12:29.834 NGUID: B87762E146734EB285F4D1D2AE197074 00:12:29.834 UUID: b87762e1-4673-4eb2-85f4-d1d2ae197074 00:12:29.835 Thin Provisioning: Not Supported 00:12:29.835 Per-NS Atomic Units: Yes 00:12:29.835 Atomic Boundary Size (Normal): 0 00:12:29.835 Atomic Boundary Size (PFail): 0 00:12:29.835 Atomic Boundary Offset: 0 00:12:29.835 Maximum Single Source Range Length: 65535 00:12:29.835 Maximum Copy Length: 65535 00:12:29.835 Maximum Source Range Count: 1 00:12:29.835 NGUID/EUI64 Never Reused: No 00:12:29.835 Namespace Write Protected: No 00:12:29.835 Number of LBA Formats: 1 00:12:29.835 Current LBA Format: LBA Format #00 00:12:29.835 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:29.835 00:12:29.835 01:15:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:29.835 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.094 [2024-05-15 01:15:05.530229] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:35.365 Initializing NVMe Controllers 00:12:35.365 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:35.365 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:35.365 Initialization complete. Launching workers. 
00:12:35.365 ======================================================== 00:12:35.365 Latency(us) 00:12:35.365 Device Information : IOPS MiB/s Average min max 00:12:35.365 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39968.15 156.13 3202.39 908.94 6762.81 00:12:35.365 ======================================================== 00:12:35.365 Total : 39968.15 156.13 3202.39 908.94 6762.81 00:12:35.365 00:12:35.365 [2024-05-15 01:15:10.636453] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:35.365 01:15:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:35.365 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.365 [2024-05-15 01:15:10.862135] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:40.649 Initializing NVMe Controllers 00:12:40.649 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:40.649 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:40.649 Initialization complete. Launching workers. 00:12:40.649 ======================================================== 00:12:40.649 Latency(us) 00:12:40.649 Device Information : IOPS MiB/s Average min max 00:12:40.649 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39937.39 156.01 3204.84 925.44 8611.87 00:12:40.649 ======================================================== 00:12:40.649 Total : 39937.39 156.01 3204.84 925.44 8611.87 00:12:40.649 00:12:40.649 [2024-05-15 01:15:15.882946] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:40.649 01:15:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:40.649 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.649 [2024-05-15 01:15:16.097026] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:45.948 [2024-05-15 01:15:21.232293] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:45.948 Initializing NVMe Controllers 00:12:45.948 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:45.948 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:45.948 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:45.948 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:45.948 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:45.948 Initialization complete. Launching workers. 
00:12:45.948 Starting thread on core 2 00:12:45.948 Starting thread on core 3 00:12:45.948 Starting thread on core 1 00:12:45.948 01:15:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:45.948 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.948 [2024-05-15 01:15:21.530805] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:49.234 [2024-05-15 01:15:24.585110] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:49.234 Initializing NVMe Controllers 00:12:49.234 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.234 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.234 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:49.234 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:49.234 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:49.234 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:49.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:49.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:49.234 Initialization complete. Launching workers. 00:12:49.234 Starting thread on core 1 with urgent priority queue 00:12:49.234 Starting thread on core 2 with urgent priority queue 00:12:49.234 Starting thread on core 3 with urgent priority queue 00:12:49.234 Starting thread on core 0 with urgent priority queue 00:12:49.234 SPDK bdev Controller (SPDK2 ) core 0: 6125.33 IO/s 16.33 secs/100000 ios 00:12:49.234 SPDK bdev Controller (SPDK2 ) core 1: 6394.67 IO/s 15.64 secs/100000 ios 00:12:49.234 SPDK bdev Controller (SPDK2 ) core 2: 5651.33 IO/s 17.69 secs/100000 ios 00:12:49.234 SPDK bdev Controller (SPDK2 ) core 3: 7850.67 IO/s 12.74 secs/100000 ios 00:12:49.234 ======================================================== 00:12:49.234 00:12:49.234 01:15:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:49.234 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.234 [2024-05-15 01:15:24.874673] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:49.234 Initializing NVMe Controllers 00:12:49.234 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.234 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:49.234 Namespace ID: 1 size: 0GB 00:12:49.234 Initialization complete. 00:12:49.234 INFO: using host memory buffer for IO 00:12:49.234 Hello world! 
00:12:49.234 [2024-05-15 01:15:24.886742] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:49.234 01:15:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:49.493 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.493 [2024-05-15 01:15:25.169729] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:50.868 Initializing NVMe Controllers 00:12:50.868 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:50.868 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:50.868 Initialization complete. Launching workers. 00:12:50.868 submit (in ns) avg, min, max = 7091.8, 3106.4, 4000385.6 00:12:50.868 complete (in ns) avg, min, max = 18301.9, 1714.4, 3999467.2 00:12:50.868 00:12:50.868 Submit histogram 00:12:50.868 ================ 00:12:50.868 Range in us Cumulative Count 00:12:50.868 3.098 - 3.110: 0.0237% ( 4) 00:12:50.868 3.110 - 3.123: 0.3018% ( 47) 00:12:50.868 3.123 - 3.136: 1.1479% ( 143) 00:12:50.868 3.136 - 3.149: 2.7750% ( 275) 00:12:50.868 3.149 - 3.162: 5.4671% ( 455) 00:12:50.868 3.162 - 3.174: 9.1533% ( 623) 00:12:50.868 3.174 - 3.187: 13.8394% ( 792) 00:12:50.868 3.187 - 3.200: 19.5136% ( 959) 00:12:50.868 3.200 - 3.213: 25.8091% ( 1064) 00:12:50.868 3.213 - 3.226: 31.7969% ( 1012) 00:12:50.868 3.226 - 3.238: 37.8380% ( 1021) 00:12:50.868 3.238 - 3.251: 44.2341% ( 1081) 00:12:50.868 3.251 - 3.264: 50.2455% ( 1016) 00:12:50.868 3.264 - 3.277: 54.5530% ( 728) 00:12:50.868 3.277 - 3.302: 60.6887% ( 1037) 00:12:50.868 3.302 - 3.328: 66.0493% ( 906) 00:12:50.868 3.328 - 3.354: 71.2029% ( 871) 00:12:50.868 3.354 - 3.379: 78.4806% ( 1230) 00:12:50.868 3.379 - 3.405: 84.6163% ( 1037) 00:12:50.868 3.405 - 3.430: 87.0244% ( 407) 00:12:50.868 3.430 - 3.456: 88.2019% ( 199) 00:12:50.868 3.456 - 3.482: 89.1663% ( 163) 00:12:50.868 3.482 - 3.507: 90.4917% ( 224) 00:12:50.868 3.507 - 3.533: 92.3377% ( 312) 00:12:50.868 3.533 - 3.558: 93.9708% ( 276) 00:12:50.868 3.558 - 3.584: 95.1364% ( 197) 00:12:50.868 3.584 - 3.610: 96.2428% ( 187) 00:12:50.868 3.610 - 3.635: 97.3493% ( 187) 00:12:50.868 3.635 - 3.661: 98.2782% ( 157) 00:12:50.868 3.661 - 3.686: 98.8226% ( 92) 00:12:50.868 3.686 - 3.712: 99.1480% ( 55) 00:12:50.868 3.712 - 3.738: 99.4142% ( 45) 00:12:50.868 3.738 - 3.763: 99.5444% ( 22) 00:12:50.868 3.763 - 3.789: 99.6154% ( 12) 00:12:50.868 3.789 - 3.814: 99.6332% ( 3) 00:12:50.868 3.814 - 3.840: 99.6509% ( 3) 00:12:50.868 3.891 - 3.917: 99.6568% ( 1) 00:12:50.868 5.325 - 5.350: 99.6627% ( 1) 00:12:50.868 5.658 - 5.683: 99.6687% ( 1) 00:12:50.868 5.683 - 5.709: 99.6746% ( 1) 00:12:50.868 5.709 - 5.734: 99.6864% ( 2) 00:12:50.868 5.914 - 5.939: 99.6923% ( 1) 00:12:50.868 5.939 - 5.965: 99.7042% ( 2) 00:12:50.868 6.067 - 6.093: 99.7101% ( 1) 00:12:50.868 6.118 - 6.144: 99.7160% ( 1) 00:12:50.868 6.144 - 6.170: 99.7219% ( 1) 00:12:50.868 6.195 - 6.221: 99.7278% ( 1) 00:12:50.868 6.221 - 6.246: 99.7337% ( 1) 00:12:50.868 6.349 - 6.374: 99.7397% ( 1) 00:12:50.868 6.451 - 6.477: 99.7456% ( 1) 00:12:50.868 6.477 - 6.502: 99.7574% ( 2) 00:12:50.868 6.605 - 6.656: 99.7633% ( 1) 00:12:50.868 6.656 - 6.707: 99.7692% ( 1) 00:12:50.868 6.758 - 6.810: 99.7752% ( 1) 00:12:50.868 6.810 - 6.861: 99.7870% ( 2) 00:12:50.868 6.861 - 6.912: 99.8107% ( 4) 00:12:50.868 
6.912 - 6.963: 99.8343% ( 4) 00:12:50.868 7.066 - 7.117: 99.8402% ( 1) 00:12:50.868 7.117 - 7.168: 99.8521% ( 2) 00:12:50.868 7.168 - 7.219: 99.8580% ( 1) 00:12:50.868 7.322 - 7.373: 99.8698% ( 2) 00:12:50.868 7.373 - 7.424: 99.8757% ( 1) 00:12:50.868 7.475 - 7.526: 99.8817% ( 1) 00:12:50.868 7.526 - 7.578: 99.8876% ( 1) 00:12:50.868 7.731 - 7.782: 99.8935% ( 1) 00:12:50.868 8.038 - 8.090: 99.8994% ( 1) 00:12:50.868 14.336 - 14.438: 99.9053% ( 1) 00:12:50.868 3984.589 - 4010.803: 100.0000% ( 16) 00:12:50.868 00:12:50.868 Complete histogram 00:12:50.868 ================== 00:12:50.868 Range in us Cumulative Count 00:12:50.868 1.702 - 1.715: 0.0059% ( 1) 00:12:50.868 1.715 - 1.728: 1.7810% ( 300) 00:12:50.868 1.728 - 1.741: 26.1523% ( 4119) 00:12:50.868 1.741 - 1.754: 40.3645% ( 2402) 00:12:50.868 1.754 - 1.766: 43.9678% ( 609) 00:12:50.868 1.766 - 1.779: 46.8848% ( 493) 00:12:50.869 1.779 - 1.792: 62.2803% ( 2602) 00:12:50.869 1.792 - 1.805: 89.7403% ( 4641) 00:12:50.869 1.805 - 1.818: 94.5625% ( 815) 00:12:50.869 1.818 - [2024-05-15 01:15:26.270071] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:50.869 1.830: 97.2842% ( 460) 00:12:50.869 1.830 - 1.843: 98.2605% ( 165) 00:12:50.869 1.843 - 1.856: 98.4971% ( 40) 00:12:50.869 1.856 - 1.869: 98.7752% ( 47) 00:12:50.869 1.869 - 1.882: 98.9113% ( 23) 00:12:50.869 1.882 - 1.894: 98.9764% ( 11) 00:12:50.869 1.894 - 1.907: 99.0178% ( 7) 00:12:50.869 1.907 - 1.920: 99.0592% ( 7) 00:12:50.869 1.920 - 1.933: 99.1006% ( 7) 00:12:50.869 1.933 - 1.946: 99.1243% ( 4) 00:12:50.869 1.946 - 1.958: 99.2131% ( 15) 00:12:50.869 1.958 - 1.971: 99.3137% ( 17) 00:12:50.869 1.971 - 1.984: 99.3432% ( 5) 00:12:50.869 1.984 - 1.997: 99.3669% ( 4) 00:12:50.869 2.010 - 2.022: 99.3728% ( 1) 00:12:50.869 2.061 - 2.074: 99.3787% ( 1) 00:12:50.869 2.112 - 2.125: 99.3847% ( 1) 00:12:50.869 2.214 - 2.227: 99.3906% ( 1) 00:12:50.869 3.891 - 3.917: 99.3965% ( 1) 00:12:50.869 3.942 - 3.968: 99.4024% ( 1) 00:12:50.869 4.224 - 4.250: 99.4083% ( 1) 00:12:50.869 4.301 - 4.326: 99.4142% ( 1) 00:12:50.869 4.429 - 4.454: 99.4202% ( 1) 00:12:50.869 4.506 - 4.531: 99.4261% ( 1) 00:12:50.869 4.608 - 4.634: 99.4320% ( 1) 00:12:50.869 4.685 - 4.710: 99.4379% ( 1) 00:12:50.869 4.813 - 4.838: 99.4438% ( 1) 00:12:50.869 4.864 - 4.890: 99.4557% ( 2) 00:12:50.869 4.966 - 4.992: 99.4616% ( 1) 00:12:50.869 5.069 - 5.094: 99.4675% ( 1) 00:12:50.869 5.146 - 5.171: 99.4734% ( 1) 00:12:50.869 5.197 - 5.222: 99.4793% ( 1) 00:12:50.869 5.274 - 5.299: 99.4852% ( 1) 00:12:50.869 5.325 - 5.350: 99.4912% ( 1) 00:12:50.869 5.504 - 5.530: 99.5030% ( 2) 00:12:50.869 5.530 - 5.555: 99.5089% ( 1) 00:12:50.869 5.555 - 5.581: 99.5148% ( 1) 00:12:50.869 5.760 - 5.786: 99.5207% ( 1) 00:12:50.869 5.862 - 5.888: 99.5267% ( 1) 00:12:50.869 6.067 - 6.093: 99.5326% ( 1) 00:12:50.869 6.246 - 6.272: 99.5385% ( 1) 00:12:50.869 6.298 - 6.323: 99.5444% ( 1) 00:12:50.869 6.374 - 6.400: 99.5503% ( 1) 00:12:50.869 6.554 - 6.605: 99.5562% ( 1) 00:12:50.869 9.933 - 9.984: 99.5622% ( 1) 00:12:50.869 11.315 - 11.366: 99.5681% ( 1) 00:12:50.869 11.469 - 11.520: 99.5740% ( 1) 00:12:50.869 12.595 - 12.646: 99.5799% ( 1) 00:12:50.869 14.541 - 14.643: 99.5858% ( 1) 00:12:50.869 3643.802 - 3670.016: 99.5917% ( 1) 00:12:50.869 3905.946 - 3932.160: 99.5977% ( 1) 00:12:50.869 3984.589 - 4010.803: 100.0000% ( 68) 00:12:50.869 00:12:50.869 01:15:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 
nqn.2019-07.io.spdk:cnode2 2 00:12:50.869 01:15:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:50.869 01:15:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:50.869 01:15:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:50.869 01:15:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:50.869 [ 00:12:50.869 { 00:12:50.869 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:50.869 "subtype": "Discovery", 00:12:50.869 "listen_addresses": [], 00:12:50.869 "allow_any_host": true, 00:12:50.869 "hosts": [] 00:12:50.869 }, 00:12:50.869 { 00:12:50.869 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:50.869 "subtype": "NVMe", 00:12:50.869 "listen_addresses": [ 00:12:50.869 { 00:12:50.869 "trtype": "VFIOUSER", 00:12:50.869 "adrfam": "IPv4", 00:12:50.869 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:50.869 "trsvcid": "0" 00:12:50.869 } 00:12:50.869 ], 00:12:50.869 "allow_any_host": true, 00:12:50.869 "hosts": [], 00:12:50.869 "serial_number": "SPDK1", 00:12:50.869 "model_number": "SPDK bdev Controller", 00:12:50.869 "max_namespaces": 32, 00:12:50.869 "min_cntlid": 1, 00:12:50.869 "max_cntlid": 65519, 00:12:50.869 "namespaces": [ 00:12:50.869 { 00:12:50.869 "nsid": 1, 00:12:50.869 "bdev_name": "Malloc1", 00:12:50.869 "name": "Malloc1", 00:12:50.869 "nguid": "DFB1867683A448AD813EED7A304A7291", 00:12:50.869 "uuid": "dfb18676-83a4-48ad-813e-ed7a304a7291" 00:12:50.869 }, 00:12:50.869 { 00:12:50.869 "nsid": 2, 00:12:50.869 "bdev_name": "Malloc3", 00:12:50.869 "name": "Malloc3", 00:12:50.869 "nguid": "0B1FB765F2CC4BD9A92C221BC0616A85", 00:12:50.869 "uuid": "0b1fb765-f2cc-4bd9-a92c-221bc0616a85" 00:12:50.869 } 00:12:50.869 ] 00:12:50.869 }, 00:12:50.869 { 00:12:50.869 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:50.869 "subtype": "NVMe", 00:12:50.869 "listen_addresses": [ 00:12:50.869 { 00:12:50.869 "trtype": "VFIOUSER", 00:12:50.869 "adrfam": "IPv4", 00:12:50.869 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:50.869 "trsvcid": "0" 00:12:50.869 } 00:12:50.869 ], 00:12:50.869 "allow_any_host": true, 00:12:50.869 "hosts": [], 00:12:50.869 "serial_number": "SPDK2", 00:12:50.869 "model_number": "SPDK bdev Controller", 00:12:50.869 "max_namespaces": 32, 00:12:50.869 "min_cntlid": 1, 00:12:50.869 "max_cntlid": 65519, 00:12:50.869 "namespaces": [ 00:12:50.869 { 00:12:50.869 "nsid": 1, 00:12:50.869 "bdev_name": "Malloc2", 00:12:50.869 "name": "Malloc2", 00:12:50.869 "nguid": "B87762E146734EB285F4D1D2AE197074", 00:12:50.869 "uuid": "b87762e1-4673-4eb2-85f4-d1d2ae197074" 00:12:50.869 } 00:12:50.869 ] 00:12:50.869 } 00:12:50.869 ] 00:12:50.869 01:15:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:50.869 01:15:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=4032596 00:12:50.869 01:15:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:50.869 01:15:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:50.869 01:15:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 
00:12:50.869 01:15:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:50.869 01:15:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:50.869 01:15:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:12:50.869 01:15:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:50.869 01:15:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:50.869 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.128 [2024-05-15 01:15:26.659590] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:51.128 Malloc4 00:12:51.128 01:15:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:51.386 [2024-05-15 01:15:26.831837] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:51.386 01:15:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:51.386 Asynchronous Event Request test 00:12:51.386 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:51.386 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:51.386 Registering asynchronous event callbacks... 00:12:51.386 Starting namespace attribute notice tests for all controllers... 00:12:51.386 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:51.386 aer_cb - Changed Namespace 00:12:51.386 Cleaning up... 
00:12:51.386 [ 00:12:51.386 { 00:12:51.386 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:51.386 "subtype": "Discovery", 00:12:51.386 "listen_addresses": [], 00:12:51.386 "allow_any_host": true, 00:12:51.386 "hosts": [] 00:12:51.386 }, 00:12:51.386 { 00:12:51.386 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:51.386 "subtype": "NVMe", 00:12:51.386 "listen_addresses": [ 00:12:51.386 { 00:12:51.386 "trtype": "VFIOUSER", 00:12:51.386 "adrfam": "IPv4", 00:12:51.386 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:51.386 "trsvcid": "0" 00:12:51.386 } 00:12:51.386 ], 00:12:51.386 "allow_any_host": true, 00:12:51.386 "hosts": [], 00:12:51.386 "serial_number": "SPDK1", 00:12:51.386 "model_number": "SPDK bdev Controller", 00:12:51.386 "max_namespaces": 32, 00:12:51.386 "min_cntlid": 1, 00:12:51.386 "max_cntlid": 65519, 00:12:51.386 "namespaces": [ 00:12:51.386 { 00:12:51.386 "nsid": 1, 00:12:51.386 "bdev_name": "Malloc1", 00:12:51.386 "name": "Malloc1", 00:12:51.386 "nguid": "DFB1867683A448AD813EED7A304A7291", 00:12:51.386 "uuid": "dfb18676-83a4-48ad-813e-ed7a304a7291" 00:12:51.386 }, 00:12:51.386 { 00:12:51.386 "nsid": 2, 00:12:51.386 "bdev_name": "Malloc3", 00:12:51.386 "name": "Malloc3", 00:12:51.386 "nguid": "0B1FB765F2CC4BD9A92C221BC0616A85", 00:12:51.386 "uuid": "0b1fb765-f2cc-4bd9-a92c-221bc0616a85" 00:12:51.386 } 00:12:51.386 ] 00:12:51.386 }, 00:12:51.386 { 00:12:51.386 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:51.386 "subtype": "NVMe", 00:12:51.386 "listen_addresses": [ 00:12:51.386 { 00:12:51.386 "trtype": "VFIOUSER", 00:12:51.386 "adrfam": "IPv4", 00:12:51.386 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:51.386 "trsvcid": "0" 00:12:51.386 } 00:12:51.386 ], 00:12:51.386 "allow_any_host": true, 00:12:51.386 "hosts": [], 00:12:51.386 "serial_number": "SPDK2", 00:12:51.386 "model_number": "SPDK bdev Controller", 00:12:51.386 "max_namespaces": 32, 00:12:51.386 "min_cntlid": 1, 00:12:51.386 "max_cntlid": 65519, 00:12:51.386 "namespaces": [ 00:12:51.386 { 00:12:51.386 "nsid": 1, 00:12:51.386 "bdev_name": "Malloc2", 00:12:51.386 "name": "Malloc2", 00:12:51.386 "nguid": "B87762E146734EB285F4D1D2AE197074", 00:12:51.386 "uuid": "b87762e1-4673-4eb2-85f4-d1d2ae197074" 00:12:51.386 }, 00:12:51.386 { 00:12:51.386 "nsid": 2, 00:12:51.386 "bdev_name": "Malloc4", 00:12:51.386 "name": "Malloc4", 00:12:51.386 "nguid": "D9E57D4FED6C42A89A5D8F064376EEB2", 00:12:51.386 "uuid": "d9e57d4f-ed6c-42a8-9a5d-8f064376eeb2" 00:12:51.386 } 00:12:51.386 ] 00:12:51.386 } 00:12:51.386 ] 00:12:51.386 01:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 4032596 00:12:51.387 01:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:51.387 01:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4024008 00:12:51.387 01:15:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 4024008 ']' 00:12:51.387 01:15:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 4024008 00:12:51.387 01:15:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:12:51.387 01:15:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:51.387 01:15:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4024008 00:12:51.645 01:15:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:51.645 01:15:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo 
']' 00:12:51.645 01:15:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4024008' 00:12:51.645 killing process with pid 4024008 00:12:51.645 01:15:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 4024008 00:12:51.645 [2024-05-15 01:15:27.090889] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:51.645 01:15:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 4024008 00:12:51.905 01:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:51.905 01:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:51.905 01:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:51.905 01:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:51.905 01:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:51.905 01:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=4032804 00:12:51.905 01:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 4032804' 00:12:51.905 Process pid: 4032804 00:12:51.905 01:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:51.905 01:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:51.905 01:15:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 4032804 00:12:51.905 01:15:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 4032804 ']' 00:12:51.905 01:15:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.905 01:15:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:51.906 01:15:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.906 01:15:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:51.906 01:15:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:51.906 [2024-05-15 01:15:27.423386] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:51.906 [2024-05-15 01:15:27.424264] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:12:51.906 [2024-05-15 01:15:27.424302] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.906 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.906 [2024-05-15 01:15:27.491786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.906 [2024-05-15 01:15:27.558623] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:51.906 [2024-05-15 01:15:27.558666] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.906 [2024-05-15 01:15:27.558675] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.906 [2024-05-15 01:15:27.558683] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.906 [2024-05-15 01:15:27.558705] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.906 [2024-05-15 01:15:27.558762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.906 [2024-05-15 01:15:27.558856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.906 [2024-05-15 01:15:27.558943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.906 [2024-05-15 01:15:27.558944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.171 [2024-05-15 01:15:27.635478] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:52.171 [2024-05-15 01:15:27.635606] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:52.171 [2024-05-15 01:15:27.635824] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:52.171 [2024-05-15 01:15:27.636179] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:52.171 [2024-05-15 01:15:27.636444] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:12:52.735 01:15:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:52.735 01:15:28 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:12:52.735 01:15:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:53.673 01:15:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:53.931 01:15:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:53.931 01:15:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:53.931 01:15:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:53.931 01:15:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:53.931 01:15:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:53.931 Malloc1 00:12:53.931 01:15:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:54.189 01:15:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:54.447 01:15:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:54.447 [2024-05-15 01:15:30.135382] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:54.707 01:15:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:54.707 01:15:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:54.707 01:15:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:54.707 Malloc2 00:12:54.707 01:15:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:54.965 01:15:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:55.224 01:15:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:55.224 01:15:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:55.224 01:15:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 4032804 00:12:55.224 01:15:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 4032804 ']' 00:12:55.224 01:15:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 4032804 
00:12:55.224 01:15:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:12:55.483 01:15:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:55.483 01:15:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4032804 00:12:55.483 01:15:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:55.483 01:15:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:55.483 01:15:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4032804' 00:12:55.483 killing process with pid 4032804 00:12:55.483 01:15:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 4032804 00:12:55.483 [2024-05-15 01:15:30.966551] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:55.483 01:15:30 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 4032804 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:55.743 00:12:55.743 real 0m51.736s 00:12:55.743 user 3m23.381s 00:12:55.743 sys 0m4.835s 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:55.743 ************************************ 00:12:55.743 END TEST nvmf_vfio_user 00:12:55.743 ************************************ 00:12:55.743 01:15:31 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:55.743 01:15:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:55.743 01:15:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:55.743 01:15:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:55.743 ************************************ 00:12:55.743 START TEST nvmf_vfio_user_nvme_compliance 00:12:55.743 ************************************ 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:55.743 * Looking for test storage... 
00:12:55.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.743 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:55.744 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:56.003 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=4033480 00:12:56.003 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 4033480' 00:12:56.003 Process pid: 4033480 00:12:56.003 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:56.003 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 4033480 00:12:56.003 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:56.003 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 4033480 ']' 00:12:56.003 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.003 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:56.003 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.003 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:56.003 01:15:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:56.003 [2024-05-15 01:15:31.484369] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:12:56.003 [2024-05-15 01:15:31.484428] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.003 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.003 [2024-05-15 01:15:31.554065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:56.003 [2024-05-15 01:15:31.627588] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.003 [2024-05-15 01:15:31.627624] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.003 [2024-05-15 01:15:31.627633] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.003 [2024-05-15 01:15:31.627642] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.003 [2024-05-15 01:15:31.627649] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
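Everything in this block hinges on waitforlisten: compliance.sh launches nvmf_tgt with -i 0 -e 0xFFFF -m 0x7 (shared-memory id 0, tracepoint group mask 0xFFFF, a three-core mask) and then blocks until the app answers on /var/tmp/spdk.sock. A simplified approximation of that wait, assuming the default RPC socket (the real helper also checks that the pid stays alive and gives up after a timeout):

# from the spdk repo root
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
for _ in $(seq 1 100); do
    # any successful RPC means the app is up and serving requests
    if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done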
00:12:56.003 [2024-05-15 01:15:31.627695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.003 [2024-05-15 01:15:31.627791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.003 [2024-05-15 01:15:31.627794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.941 01:15:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:56.941 01:15:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:12:56.941 01:15:32 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:57.880 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:57.880 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:57.880 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:57.880 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.880 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:57.880 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.880 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:57.880 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:57.880 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.880 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:57.880 malloc0 00:12:57.880 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.880 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:57.880 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.880 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:57.880 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.880 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:57.880 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.880 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:57.880 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.881 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:57.881 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:57.881 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:57.881 [2024-05-15 01:15:33.372127] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:57.881 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:57.881 01:15:33 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:57.881 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.881 00:12:57.881 00:12:57.881 CUnit - A unit testing framework for C - Version 2.1-3 00:12:57.881 http://cunit.sourceforge.net/ 00:12:57.881 00:12:57.881 00:12:57.881 Suite: nvme_compliance 00:12:57.881 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 01:15:33.547651] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:57.881 [2024-05-15 01:15:33.549002] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:57.881 [2024-05-15 01:15:33.549018] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:57.881 [2024-05-15 01:15:33.549026] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:57.881 [2024-05-15 01:15:33.552686] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.140 passed 00:12:58.140 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 01:15:33.628237] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.140 [2024-05-15 01:15:33.631258] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.140 passed 00:12:58.140 Test: admin_identify_ns ...[2024-05-15 01:15:33.710887] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.140 [2024-05-15 01:15:33.771201] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:58.140 [2024-05-15 01:15:33.779203] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:58.140 [2024-05-15 01:15:33.799301] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.140 passed 00:12:58.399 Test: admin_get_features_mandatory_features ...[2024-05-15 01:15:33.871589] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.399 [2024-05-15 01:15:33.874610] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.399 passed 00:12:58.399 Test: admin_get_features_optional_features ...[2024-05-15 01:15:33.950102] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.399 [2024-05-15 01:15:33.956151] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.399 passed 00:12:58.399 Test: admin_set_features_number_of_queues ...[2024-05-15 01:15:34.030227] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.658 [2024-05-15 01:15:34.137289] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.658 passed 00:12:58.658 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 01:15:34.209678] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.658 [2024-05-15 01:15:34.212699] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.658 passed 
00:12:58.658 Test: admin_get_log_page_with_lpo ...[2024-05-15 01:15:34.288149] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.916 [2024-05-15 01:15:34.357200] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:58.916 [2024-05-15 01:15:34.370250] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.916 passed 00:12:58.916 Test: fabric_property_get ...[2024-05-15 01:15:34.443509] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.917 [2024-05-15 01:15:34.444738] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:58.917 [2024-05-15 01:15:34.446528] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.917 passed 00:12:58.917 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 01:15:34.521009] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.917 [2024-05-15 01:15:34.522231] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:58.917 [2024-05-15 01:15:34.524025] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.917 passed 00:12:58.917 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 01:15:34.598457] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.175 [2024-05-15 01:15:34.686196] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:59.175 [2024-05-15 01:15:34.702199] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:59.175 [2024-05-15 01:15:34.707295] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.175 passed 00:12:59.175 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 01:15:34.778661] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.175 [2024-05-15 01:15:34.779885] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:12:59.175 [2024-05-15 01:15:34.781679] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.175 passed 00:12:59.175 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 01:15:34.859143] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.452 [2024-05-15 01:15:34.932204] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:12:59.452 [2024-05-15 01:15:34.956206] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:12:59.452 [2024-05-15 01:15:34.961284] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.452 passed 00:12:59.452 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 01:15:35.035399] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.452 [2024-05-15 01:15:35.036620] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:12:59.452 [2024-05-15 01:15:35.036646] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:12:59.452 [2024-05-15 01:15:35.038421] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.452 passed 00:12:59.452 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 
01:15:35.112871] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.757 [2024-05-15 01:15:35.204196] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:12:59.757 [2024-05-15 01:15:35.212208] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:12:59.757 [2024-05-15 01:15:35.220200] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:12:59.757 [2024-05-15 01:15:35.228201] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:12:59.757 [2024-05-15 01:15:35.257291] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.757 passed 00:12:59.757 Test: admin_create_io_sq_verify_pc ...[2024-05-15 01:15:35.329560] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.757 [2024-05-15 01:15:35.347208] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:12:59.757 [2024-05-15 01:15:35.364743] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.757 passed 00:12:59.757 Test: admin_create_io_qp_max_qps ...[2024-05-15 01:15:35.439239] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:01.137 [2024-05-15 01:15:36.557203] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:01.396 [2024-05-15 01:15:36.932817] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:01.396 passed 00:13:01.396 Test: admin_create_io_sq_shared_cq ...[2024-05-15 01:15:37.008195] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:01.656 [2024-05-15 01:15:37.142198] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:01.656 [2024-05-15 01:15:37.179261] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:01.656 passed 00:13:01.656 00:13:01.656 Run Summary: Type Total Ran Passed Failed Inactive 00:13:01.656 suites 1 1 n/a 0 0 00:13:01.656 tests 18 18 18 0 0 00:13:01.656 asserts 360 360 360 0 n/a 00:13:01.656 00:13:01.656 Elapsed time = 1.493 seconds 00:13:01.656 01:15:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 4033480 00:13:01.656 01:15:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 4033480 ']' 00:13:01.656 01:15:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 4033480 00:13:01.656 01:15:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:13:01.656 01:15:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:01.656 01:15:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4033480 00:13:01.656 01:15:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:01.656 01:15:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:01.656 01:15:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4033480' 00:13:01.656 killing process with pid 4033480 00:13:01.656 01:15:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@965 -- # kill 4033480 00:13:01.656 [2024-05-15 01:15:37.276581] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:01.656 01:15:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 4033480 00:13:01.916 01:15:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:01.916 01:15:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:01.916 00:13:01.916 real 0m6.199s 00:13:01.916 user 0m17.431s 00:13:01.916 sys 0m0.712s 00:13:01.916 01:15:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:01.916 01:15:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:01.916 ************************************ 00:13:01.916 END TEST nvmf_vfio_user_nvme_compliance 00:13:01.916 ************************************ 00:13:01.916 01:15:37 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:01.916 01:15:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:01.916 01:15:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:01.916 01:15:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:01.916 ************************************ 00:13:01.916 START TEST nvmf_vfio_user_fuzz 00:13:01.916 ************************************ 00:13:01.916 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:02.176 * Looking for test storage... 
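The compliance suite that just finished (1 suite, 18/18 tests, 360 asserts) talks to a vfio-user controller assembled entirely from the RPCs shown above. Written out as direct rpc.py calls against the default /var/tmp/spdk.sock socket (rpc_cmd in these scripts is effectively a wrapper around scripts/rpc.py, so this is an equivalent sketch rather than a verbatim replay):

# vfio-user transport plus the socket directory the controller will live in
./scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user

# 64 MiB malloc bdev exposed as a namespace of cnode0 (serial "spdk", up to 32 namespaces)
./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0

# point the compliance binary at the socket directory
./test/nvme/compliance/nvme_compliance -g \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'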
00:13:02.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.176 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:02.177 01:15:37 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=4034609 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 4034609' 00:13:02.177 Process pid: 4034609 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 4034609 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 4034609 ']' 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:02.177 01:15:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:03.115 01:15:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:03.115 01:15:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:13:03.115 01:15:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:04.052 malloc0 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:04.052 01:15:39 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:36.135 Fuzzing completed. Shutting down the fuzz application 00:13:36.135 00:13:36.135 Dumping successful admin opcodes: 00:13:36.135 8, 9, 10, 24, 00:13:36.135 Dumping successful io opcodes: 00:13:36.135 0, 00:13:36.135 NS: 0x200003a1ef00 I/O qp, Total commands completed: 888149, total successful commands: 3458, random_seed: 2662905216 00:13:36.135 NS: 0x200003a1ef00 admin qp, Total commands completed: 216964, total successful commands: 1744, random_seed: 3450986624 00:13:36.135 01:16:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:36.135 01:16:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.135 01:16:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:36.135 01:16:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.135 01:16:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 4034609 00:13:36.135 01:16:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 4034609 ']' 00:13:36.135 01:16:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 4034609 00:13:36.135 01:16:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:13:36.135 01:16:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:36.135 01:16:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4034609 00:13:36.135 01:16:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:36.135 01:16:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:36.135 01:16:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4034609' 00:13:36.135 killing process with pid 4034609 00:13:36.135 01:16:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 4034609 00:13:36.135 01:16:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 4034609 00:13:36.135 01:16:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 
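The fuzz pass above completed about 888k I/O and 217k admin commands (3458 and 1744 of them successful, respectively) against the same nqn.2021-09.io.spdk:cnode0 vfio-user subsystem without crashing the target, which is then torn down normally. The invocation is reproduced verbatim from the log rather than interpreted flag by flag; only -m (core mask) and -F (target transport ID) are obvious from context:

# run from the spdk repo root against an already-configured vfio-user target
./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
    -N -a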
00:13:36.135 01:16:10 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:36.135 00:13:36.135 real 0m32.882s 00:13:36.135 user 0m29.375s 00:13:36.135 sys 0m31.438s 00:13:36.135 01:16:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:36.135 01:16:10 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:36.135 ************************************ 00:13:36.135 END TEST nvmf_vfio_user_fuzz 00:13:36.135 ************************************ 00:13:36.135 01:16:10 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:36.135 01:16:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:36.135 01:16:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:36.135 01:16:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:36.135 ************************************ 00:13:36.135 START TEST nvmf_host_management 00:13:36.135 ************************************ 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:36.135 * Looking for test storage... 00:13:36.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:13:36.135 01:16:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:41.414 01:16:16 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:41.414 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:41.414 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
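Both ports of the E810 adapter (vendor:device 8086:159b) were matched at 0000:af:00.0 and 0000:af:00.1; the next lines resolve each PCI function to its kernel netdev through sysfs, which is how cvl_0_0 and cvl_0_1 are discovered. An equivalent standalone check (illustrative only; common.sh relies on its own pci_bus_cache rather than lspci):

# list Intel E810 (8086:159b) functions and the net interface behind each one
for pci in $(lspci -Dnn | awk '/8086:159b/ {print $1}'); do
    netdev=$(ls "/sys/bus/pci/devices/$pci/net/" 2>/dev/null)
    echo "Found net devices under $pci: ${netdev:-none}"
done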
00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:41.414 Found net devices under 0000:af:00.0: cvl_0_0 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:41.414 Found net devices under 0000:af:00.1: cvl_0_1 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:41.414 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:41.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:41.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:13:41.415 00:13:41.415 --- 10.0.0.2 ping statistics --- 00:13:41.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.415 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:41.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:41.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:13:41.415 00:13:41.415 --- 10.0.0.1 ping statistics --- 00:13:41.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.415 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=4043478 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 4043478 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 4043478 ']' 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:41.415 01:16:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:41.415 [2024-05-15 01:16:17.022059] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
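Before the target application is started, nvmf_tcp_init (above) wires the two E810 ports back to back through a network namespace: cvl_0_0 becomes the target-side port inside cvl_0_0_ns_spdk with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1/24, TCP port 4420 is opened, and a ping in each direction (about 0.2 ms) confirms the path. Condensed from the commands in the log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side (root netns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # reach the target-side address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # and back

This is also why nvmf_tgt itself is launched under ip netns exec cvl_0_0_ns_spdk: the TCP listener it creates later binds to 10.0.0.2 inside that namespace.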
00:13:41.415 [2024-05-15 01:16:17.022104] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.415 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.415 [2024-05-15 01:16:17.096979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:41.675 [2024-05-15 01:16:17.173092] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.675 [2024-05-15 01:16:17.173129] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.675 [2024-05-15 01:16:17.173139] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.675 [2024-05-15 01:16:17.173148] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.675 [2024-05-15 01:16:17.173155] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:41.675 [2024-05-15 01:16:17.173261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.675 [2024-05-15 01:16:17.173347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.675 [2024-05-15 01:16:17.173475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.675 [2024-05-15 01:16:17.173476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:42.245 01:16:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:42.245 01:16:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:13:42.245 01:16:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:42.245 01:16:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:42.245 01:16:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:42.245 01:16:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.245 01:16:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:42.245 01:16:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.245 01:16:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:42.245 [2024-05-15 01:16:17.869905] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.245 01:16:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.245 01:16:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:42.245 01:16:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:42.245 01:16:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:42.245 01:16:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:42.245 01:16:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:42.245 01:16:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:42.245 01:16:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.245 01:16:17 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:42.245 Malloc0 00:13:42.245 [2024-05-15 01:16:17.936242] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:42.505 [2024-05-15 01:16:17.936487] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:42.505 01:16:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.505 01:16:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:42.505 01:16:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:42.505 01:16:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:42.505 01:16:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=4043610 00:13:42.505 01:16:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 4043610 /var/tmp/bdevperf.sock 00:13:42.505 01:16:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 4043610 ']' 00:13:42.505 01:16:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:42.505 01:16:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:42.505 01:16:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:42.505 01:16:17 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:42.505 01:16:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:42.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:42.505 01:16:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:42.505 01:16:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:42.505 01:16:17 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:42.505 01:16:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:42.505 01:16:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:42.505 01:16:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:42.505 { 00:13:42.505 "params": { 00:13:42.505 "name": "Nvme$subsystem", 00:13:42.505 "trtype": "$TEST_TRANSPORT", 00:13:42.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:42.505 "adrfam": "ipv4", 00:13:42.505 "trsvcid": "$NVMF_PORT", 00:13:42.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:42.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:42.505 "hdgst": ${hdgst:-false}, 00:13:42.505 "ddgst": ${ddgst:-false} 00:13:42.505 }, 00:13:42.505 "method": "bdev_nvme_attach_controller" 00:13:42.505 } 00:13:42.505 EOF 00:13:42.505 )") 00:13:42.505 01:16:17 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:42.505 01:16:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
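For reference, the bdevperf configuration that gen_nvmf_target_json assembles in the trace above can be written out by hand as below. This is a sketch, not the harness's exact code path: the subsystems/bdev wrapper is assumed from SPDK's usual --json config layout, a temporary file stands in for the /dev/fd/63 process substitution, and the attach parameters mirror the rendered JSON printed further down.

cat > /tmp/nvme0_config.json << 'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0_config.json -q 64 -o 65536 -w verify -t 10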
00:13:42.505 01:16:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:42.505 01:16:18 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:42.505 "params": { 00:13:42.505 "name": "Nvme0", 00:13:42.505 "trtype": "tcp", 00:13:42.505 "traddr": "10.0.0.2", 00:13:42.505 "adrfam": "ipv4", 00:13:42.505 "trsvcid": "4420", 00:13:42.505 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:42.505 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:42.505 "hdgst": false, 00:13:42.505 "ddgst": false 00:13:42.505 }, 00:13:42.505 "method": "bdev_nvme_attach_controller" 00:13:42.505 }' 00:13:42.505 [2024-05-15 01:16:18.038482] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:13:42.505 [2024-05-15 01:16:18.038531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4043610 ] 00:13:42.505 EAL: No free 2048 kB hugepages reported on node 1 00:13:42.505 [2024-05-15 01:16:18.109621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.505 [2024-05-15 01:16:18.178557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.111 Running I/O for 10 seconds... 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.399 01:16:18 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.399 01:16:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:43.399 [2024-05-15 01:16:18.911700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2123230 is same with the state(5) to be set 00:13:43.399 [2024-05-15 01:16:18.911742] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2123230 is same with the state(5) to be set 00:13:43.399 [2024-05-15 01:16:18.911752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2123230 is same with the state(5) to be set 00:13:43.399 [2024-05-15 01:16:18.911761] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2123230 is same with the state(5) to be set 00:13:43.399 [2024-05-15 01:16:18.911769] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2123230 is same with the state(5) to be set 00:13:43.399 [2024-05-15 01:16:18.911778] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2123230 is same with the state(5) to be set 00:13:43.399 [2024-05-15 01:16:18.911787] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2123230 is same with the state(5) to be set 00:13:43.399 [2024-05-15 01:16:18.911795] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2123230 is same with the state(5) to be set 00:13:43.399 [2024-05-15 01:16:18.912594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.399 [2024-05-15 01:16:18.912628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.399 [2024-05-15 01:16:18.912647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.399 [2024-05-15 01:16:18.912657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.399 [2024-05-15 01:16:18.912669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.399 [2024-05-15 01:16:18.912679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.399 [2024-05-15 01:16:18.912690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.399 [2024-05-15 01:16:18.912704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.399 [2024-05-15 01:16:18.912716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.399 [2024-05-15 01:16:18.912726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.399 [2024-05-15 01:16:18.912737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.399 [2024-05-15 01:16:18.912747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.399 [2024-05-15 01:16:18.912759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.399 [2024-05-15 01:16:18.912768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.399 [2024-05-15 01:16:18.912779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.399 [2024-05-15 01:16:18.912789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.399 [2024-05-15 01:16:18.912800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.399 [2024-05-15 01:16:18.912810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.399 [2024-05-15 01:16:18.912822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.399 [2024-05-15 01:16:18.912831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.399 [2024-05-15 01:16:18.912842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.399 [2024-05-15 01:16:18.912852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.399 [2024-05-15 01:16:18.912863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.399 [2024-05-15 01:16:18.912874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.399 [2024-05-15 01:16:18.912886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.399 [2024-05-15 01:16:18.912896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.399 [2024-05-15 01:16:18.912907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.912917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.912928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.912939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.912950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.912962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.912973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.912983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.912995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:13:43.400 [2024-05-15 01:16:18.913148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 
[2024-05-15 01:16:18.913376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 
01:16:18.913592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.400 [2024-05-15 01:16:18.913780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.400 [2024-05-15 01:16:18.913790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.401 [2024-05-15 
01:16:18.913801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.401 [2024-05-15 01:16:18.913810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.401 [2024-05-15 01:16:18.913821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.401 [2024-05-15 01:16:18.913832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.401 [2024-05-15 01:16:18.913843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.401 [2024-05-15 01:16:18.913853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.401 [2024-05-15 01:16:18.913864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.401 [2024-05-15 01:16:18.913874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.401 [2024-05-15 01:16:18.913885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.401 [2024-05-15 01:16:18.913895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.401 [2024-05-15 01:16:18.913906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.401 [2024-05-15 01:16:18.913916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.401 [2024-05-15 01:16:18.913927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.401 [2024-05-15 01:16:18.913936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.401 [2024-05-15 01:16:18.913948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.401 [2024-05-15 01:16:18.913957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.401 [2024-05-15 01:16:18.913970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.401 [2024-05-15 01:16:18.913980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.401 [2024-05-15 01:16:18.913991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:73600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:43.401 [2024-05-15 01:16:18.914000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.401 [2024-05-15 
01:16:18.914066] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe54ad0 was disconnected and freed. reset controller. 00:13:43.401 [2024-05-15 01:16:18.914922] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:43.401 task offset: 65536 on job bdev=Nvme0n1 fails 00:13:43.401 00:13:43.401 Latency(us) 00:13:43.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.401 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:43.401 Job: Nvme0n1 ended in about 0.41 seconds with error 00:13:43.401 Verification LBA range: start 0x0 length 0x400 00:13:43.401 Nvme0n1 : 0.41 1247.87 77.99 155.98 0.00 44538.07 1756.36 50751.08 00:13:43.401 =================================================================================================================== 00:13:43.401 Total : 1247.87 77.99 155.98 0.00 44538.07 1756.36 50751.08 00:13:43.401 01:16:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.401 [2024-05-15 01:16:18.916556] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:43.401 [2024-05-15 01:16:18.916575] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa439f0 (9): Bad file descriptor 00:13:43.401 01:16:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:43.401 01:16:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.401 01:16:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:43.401 [2024-05-15 01:16:18.919525] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:13:43.401 [2024-05-15 01:16:18.919655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:43.401 [2024-05-15 01:16:18.919684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:43.401 [2024-05-15 01:16:18.919702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:13:43.401 [2024-05-15 01:16:18.919712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:13:43.401 [2024-05-15 01:16:18.919723] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:13:43.401 [2024-05-15 01:16:18.919732] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa439f0 00:13:43.401 [2024-05-15 01:16:18.919755] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa439f0 (9): Bad file descriptor 00:13:43.401 [2024-05-15 01:16:18.919770] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:13:43.401 [2024-05-15 01:16:18.919780] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:13:43.401 [2024-05-15 01:16:18.919791] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
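What this block captures is the core of the host-management test: nvmf_subsystem_remove_host is issued while bdevperf is driving I/O, the target tears down the queue pairs (the ABORTED - SQ DELETION completions dumped above), and the initiator's reconnect is refused with "does not allow host" until the host NQN is re-added. Reduced to the two RPCs involved, the sequence is roughly the following sketch (rpc.py talks to the target's default /var/tmp/spdk.sock):

# revoke the host: outstanding I/O is aborted and reconnect attempts are rejected
./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# re-admit the host so a later connection (the second bdevperf run) can succeed
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0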
00:13:43.401 [2024-05-15 01:16:18.919807] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:13:43.401 01:16:18 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.401 01:16:18 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:44.340 01:16:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 4043610 00:13:44.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (4043610) - No such process 00:13:44.340 01:16:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:44.340 01:16:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:44.340 01:16:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:44.340 01:16:19 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:44.340 01:16:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:44.340 01:16:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:44.340 01:16:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:44.340 01:16:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:44.340 { 00:13:44.340 "params": { 00:13:44.340 "name": "Nvme$subsystem", 00:13:44.340 "trtype": "$TEST_TRANSPORT", 00:13:44.340 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:44.340 "adrfam": "ipv4", 00:13:44.340 "trsvcid": "$NVMF_PORT", 00:13:44.340 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:44.340 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:44.340 "hdgst": ${hdgst:-false}, 00:13:44.340 "ddgst": ${ddgst:-false} 00:13:44.340 }, 00:13:44.340 "method": "bdev_nvme_attach_controller" 00:13:44.340 } 00:13:44.340 EOF 00:13:44.340 )") 00:13:44.340 01:16:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:44.340 01:16:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:44.340 01:16:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:44.340 01:16:19 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:44.340 "params": { 00:13:44.340 "name": "Nvme0", 00:13:44.340 "trtype": "tcp", 00:13:44.340 "traddr": "10.0.0.2", 00:13:44.340 "adrfam": "ipv4", 00:13:44.340 "trsvcid": "4420", 00:13:44.340 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:44.340 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:44.340 "hdgst": false, 00:13:44.340 "ddgst": false 00:13:44.340 }, 00:13:44.340 "method": "bdev_nvme_attach_controller" 00:13:44.340 }' 00:13:44.340 [2024-05-15 01:16:19.980454] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:13:44.340 [2024-05-15 01:16:19.980510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4043996 ] 00:13:44.341 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.600 [2024-05-15 01:16:20.052315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.600 [2024-05-15 01:16:20.131757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.859 Running I/O for 1 seconds... 00:13:45.799 00:13:45.799 Latency(us) 00:13:45.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.800 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:45.800 Verification LBA range: start 0x0 length 0x400 00:13:45.800 Nvme0n1 : 1.01 1268.77 79.30 0.00 0.00 49823.72 9961.47 50331.65 00:13:45.800 =================================================================================================================== 00:13:45.800 Total : 1268.77 79.30 0.00 0.00 49823.72 9961.47 50331.65 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:46.060 rmmod nvme_tcp 00:13:46.060 rmmod nvme_fabrics 00:13:46.060 rmmod nvme_keyring 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 4043478 ']' 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 4043478 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 4043478 ']' 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 4043478 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:46.060 01:16:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4043478 00:13:46.320 01:16:21 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:46.320 01:16:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:46.320 01:16:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4043478' 00:13:46.320 killing process with pid 4043478 00:13:46.320 01:16:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 4043478 00:13:46.320 [2024-05-15 01:16:21.793728] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:46.320 01:16:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 4043478 00:13:46.320 [2024-05-15 01:16:21.993904] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:46.580 01:16:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:46.580 01:16:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:46.580 01:16:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:46.580 01:16:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:46.580 01:16:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:46.580 01:16:22 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.580 01:16:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.580 01:16:22 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.500 01:16:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:48.500 01:16:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:48.500 00:13:48.500 real 0m13.537s 00:13:48.500 user 0m23.355s 00:13:48.500 sys 0m6.062s 00:13:48.500 01:16:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:48.500 01:16:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:48.500 ************************************ 00:13:48.500 END TEST nvmf_host_management 00:13:48.500 ************************************ 00:13:48.500 01:16:24 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:48.500 01:16:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:48.500 01:16:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:48.500 01:16:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:48.759 ************************************ 00:13:48.759 START TEST nvmf_lvol 00:13:48.759 ************************************ 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:48.759 * Looking for test storage... 
00:13:48.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.759 01:16:24 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:13:48.759 01:16:24 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:55.329 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:55.329 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:13:55.329 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:55.329 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:55.329 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:55.329 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:55.329 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:55.329 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:13:55.329 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:13:55.330 Found 0000:af:00.0 (0x8086 - 0x159b) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:13:55.330 Found 0000:af:00.1 (0x8086 - 0x159b) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:13:55.330 Found net devices under 0000:af:00.0: cvl_0_0 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:13:55.330 Found net devices under 0000:af:00.1: cvl_0_1 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:55.330 
01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.330 01:16:30 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.330 01:16:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:55.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:55.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:13:55.330 00:13:55.330 --- 10.0.0.2 ping statistics --- 00:13:55.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.330 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:13:55.330 01:16:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:55.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:55.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:13:55.592 00:13:55.592 --- 10.0.0.1 ping statistics --- 00:13:55.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.592 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=4048048 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 4048048 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 4048048 ']' 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:55.592 01:16:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:55.592 [2024-05-15 01:16:31.130295] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:13:55.592 [2024-05-15 01:16:31.130349] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.592 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.592 [2024-05-15 01:16:31.205399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:55.592 [2024-05-15 01:16:31.280431] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.592 [2024-05-15 01:16:31.280465] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:55.592 [2024-05-15 01:16:31.280475] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.592 [2024-05-15 01:16:31.280484] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.592 [2024-05-15 01:16:31.280491] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:55.592 [2024-05-15 01:16:31.280537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.592 [2024-05-15 01:16:31.280633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.592 [2024-05-15 01:16:31.280635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.529 01:16:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:56.529 01:16:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:13:56.529 01:16:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:56.529 01:16:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:56.529 01:16:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:56.529 01:16:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.529 01:16:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:56.529 [2024-05-15 01:16:32.118166] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.529 01:16:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:56.788 01:16:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:56.788 01:16:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:57.048 01:16:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:57.048 01:16:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:57.048 01:16:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:57.308 01:16:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=17f62588-7a18-4b27-8813-f4c01249ca6e 00:13:57.308 01:16:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 17f62588-7a18-4b27-8813-f4c01249ca6e lvol 20 00:13:57.568 01:16:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7feecf20-df7f-4c14-8e5b-68261a59ddeb 00:13:57.568 01:16:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:57.568 01:16:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7feecf20-df7f-4c14-8e5b-68261a59ddeb 00:13:57.828 01:16:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:13:58.087 [2024-05-15 01:16:33.590332] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:58.087 [2024-05-15 01:16:33.590637] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.087 01:16:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:58.087 01:16:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=4048435 00:13:58.087 01:16:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:58.087 01:16:33 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:58.346 EAL: No free 2048 kB hugepages reported on node 1 00:13:59.285 01:16:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7feecf20-df7f-4c14-8e5b-68261a59ddeb MY_SNAPSHOT 00:13:59.544 01:16:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e56667ea-71ed-4ef8-8dd7-7ffd3b4dfe8f 00:13:59.544 01:16:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7feecf20-df7f-4c14-8e5b-68261a59ddeb 30 00:13:59.544 01:16:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e56667ea-71ed-4ef8-8dd7-7ffd3b4dfe8f MY_CLONE 00:13:59.804 01:16:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ff9bc381-18f1-4bf3-b91c-e89607413767 00:13:59.804 01:16:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ff9bc381-18f1-4bf3-b91c-e89607413767 00:14:00.372 01:16:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 4048435 00:14:08.557 Initializing NVMe Controllers 00:14:08.557 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:08.557 Controller IO queue size 128, less than required. 00:14:08.557 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:08.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:08.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:08.557 Initialization complete. Launching workers. 
00:14:08.557 ======================================================== 00:14:08.557 Latency(us) 00:14:08.557 Device Information : IOPS MiB/s Average min max 00:14:08.557 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12573.90 49.12 10183.28 1871.24 56902.77 00:14:08.557 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12391.60 48.40 10333.03 3744.41 50140.83 00:14:08.557 ======================================================== 00:14:08.557 Total : 24965.50 97.52 10257.61 1871.24 56902.77 00:14:08.557 00:14:08.557 01:16:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:08.557 01:16:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7feecf20-df7f-4c14-8e5b-68261a59ddeb 00:14:08.817 01:16:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 17f62588-7a18-4b27-8813-f4c01249ca6e 00:14:09.076 01:16:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:09.076 01:16:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:09.076 01:16:44 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:09.076 01:16:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:09.076 01:16:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:09.076 01:16:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:09.076 01:16:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:09.077 01:16:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:09.077 01:16:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:09.077 rmmod nvme_tcp 00:14:09.077 rmmod nvme_fabrics 00:14:09.077 rmmod nvme_keyring 00:14:09.077 01:16:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:09.077 01:16:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:09.077 01:16:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:09.077 01:16:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 4048048 ']' 00:14:09.077 01:16:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 4048048 00:14:09.077 01:16:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 4048048 ']' 00:14:09.077 01:16:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 4048048 00:14:09.077 01:16:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:14:09.077 01:16:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:09.077 01:16:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4048048 00:14:09.077 01:16:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:09.077 01:16:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:09.077 01:16:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4048048' 00:14:09.077 killing process with pid 4048048 00:14:09.077 01:16:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 4048048 00:14:09.077 [2024-05-15 01:16:44.695162] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' 
scheduled for removal in v24.09 hit 1 times 00:14:09.077 01:16:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 4048048 00:14:09.336 01:16:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:09.336 01:16:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:09.336 01:16:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:09.336 01:16:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:09.336 01:16:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:09.336 01:16:44 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.336 01:16:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.336 01:16:44 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.872 01:16:47 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:11.872 00:14:11.872 real 0m22.811s 00:14:11.872 user 1m1.672s 00:14:11.872 sys 0m9.748s 00:14:11.872 01:16:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:11.872 01:16:47 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:11.872 ************************************ 00:14:11.872 END TEST nvmf_lvol 00:14:11.872 ************************************ 00:14:11.872 01:16:47 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:11.872 01:16:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:11.872 01:16:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:11.872 01:16:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:11.872 ************************************ 00:14:11.872 START TEST nvmf_lvs_grow 00:14:11.872 ************************************ 00:14:11.872 01:16:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:11.872 * Looking for test storage... 
00:14:11.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:11.872 01:16:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:11.873 01:16:47 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:18.443 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:18.443 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.443 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:18.444 Found net devices under 0000:af:00.0: cvl_0_0 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:18.444 Found net devices under 0000:af:00.1: cvl_0_1 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:18.444 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:18.702 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:18.702 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:18.702 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:18.702 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:18.702 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:18.702 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:18.702 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:18.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:14:18.702 00:14:18.702 --- 10.0.0.2 ping statistics --- 00:14:18.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.702 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:14:18.702 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:18.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:18.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:14:18.702 00:14:18.702 --- 10.0.0.1 ping statistics --- 00:14:18.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.702 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:14:18.702 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.702 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:18.702 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:18.702 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.702 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:18.702 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:18.702 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.702 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:18.702 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:18.962 01:16:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:18.962 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:18.962 01:16:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:18.962 01:16:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:18.962 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=4053997 00:14:18.962 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:18.962 01:16:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 4053997 00:14:18.962 01:16:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 4053997 ']' 00:14:18.962 01:16:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.962 01:16:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:18.962 01:16:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.962 01:16:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:18.962 01:16:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:18.962 [2024-05-15 01:16:54.451755] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:14:18.962 [2024-05-15 01:16:54.451799] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.962 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.962 [2024-05-15 01:16:54.526768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.962 [2024-05-15 01:16:54.598685] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.962 [2024-05-15 01:16:54.598726] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:18.962 [2024-05-15 01:16:54.598736] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.962 [2024-05-15 01:16:54.598744] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.962 [2024-05-15 01:16:54.598752] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.962 [2024-05-15 01:16:54.598781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:19.898 [2024-05-15 01:16:55.434510] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:19.898 ************************************ 00:14:19.898 START TEST lvs_grow_clean 00:14:19.898 ************************************ 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:19.898 01:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:20.162 01:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:20.162 01:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:20.421 01:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=892c1a2a-dd6d-439c-8550-1c40e1077296 00:14:20.421 01:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 892c1a2a-dd6d-439c-8550-1c40e1077296 00:14:20.421 01:16:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:20.421 01:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:20.421 01:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:20.421 01:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 892c1a2a-dd6d-439c-8550-1c40e1077296 lvol 150 00:14:20.679 01:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ab508257-82f5-4d88-9b8b-062102e3ae9d 00:14:20.679 01:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:20.679 01:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:20.679 [2024-05-15 01:16:56.343613] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:20.679 [2024-05-15 01:16:56.343664] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:20.679 true 00:14:20.679 01:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 892c1a2a-dd6d-439c-8550-1c40e1077296 00:14:20.679 01:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:20.938 01:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:20.938 01:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:21.197 01:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ab508257-82f5-4d88-9b8b-062102e3ae9d 00:14:21.197 01:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:21.456 [2024-05-15 01:16:56.973302] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:21.456 [2024-05-15 
01:16:56.973560] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.456 01:16:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:21.456 01:16:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4054559 00:14:21.456 01:16:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:21.456 01:16:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4054559 /var/tmp/bdevperf.sock 00:14:21.456 01:16:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 4054559 ']' 00:14:21.715 01:16:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:21.715 01:16:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:21.715 01:16:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:21.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:21.715 01:16:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:21.715 01:16:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:21.715 01:16:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:21.715 [2024-05-15 01:16:57.191261] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:14:21.715 [2024-05-15 01:16:57.191311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4054559 ] 00:14:21.715 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.715 [2024-05-15 01:16:57.258947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.715 [2024-05-15 01:16:57.331174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.282 01:16:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:22.282 01:16:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:14:22.282 01:16:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:22.849 Nvme0n1 00:14:22.849 01:16:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:22.849 [ 00:14:22.849 { 00:14:22.849 "name": "Nvme0n1", 00:14:22.849 "aliases": [ 00:14:22.849 "ab508257-82f5-4d88-9b8b-062102e3ae9d" 00:14:22.849 ], 00:14:22.849 "product_name": "NVMe disk", 00:14:22.849 "block_size": 4096, 00:14:22.849 "num_blocks": 38912, 00:14:22.849 "uuid": "ab508257-82f5-4d88-9b8b-062102e3ae9d", 00:14:22.849 "assigned_rate_limits": { 00:14:22.849 "rw_ios_per_sec": 0, 00:14:22.849 "rw_mbytes_per_sec": 0, 00:14:22.849 "r_mbytes_per_sec": 0, 00:14:22.849 "w_mbytes_per_sec": 0 00:14:22.849 }, 00:14:22.849 "claimed": false, 00:14:22.849 "zoned": false, 00:14:22.849 "supported_io_types": { 00:14:22.849 "read": true, 00:14:22.849 "write": true, 00:14:22.849 "unmap": true, 00:14:22.849 "write_zeroes": true, 00:14:22.849 "flush": true, 00:14:22.849 "reset": true, 00:14:22.849 "compare": true, 00:14:22.849 "compare_and_write": true, 00:14:22.849 "abort": true, 00:14:22.849 "nvme_admin": true, 00:14:22.849 "nvme_io": true 00:14:22.849 }, 00:14:22.849 "memory_domains": [ 00:14:22.849 { 00:14:22.849 "dma_device_id": "system", 00:14:22.849 "dma_device_type": 1 00:14:22.849 } 00:14:22.849 ], 00:14:22.849 "driver_specific": { 00:14:22.849 "nvme": [ 00:14:22.849 { 00:14:22.849 "trid": { 00:14:22.849 "trtype": "TCP", 00:14:22.849 "adrfam": "IPv4", 00:14:22.849 "traddr": "10.0.0.2", 00:14:22.849 "trsvcid": "4420", 00:14:22.849 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:22.849 }, 00:14:22.850 "ctrlr_data": { 00:14:22.850 "cntlid": 1, 00:14:22.850 "vendor_id": "0x8086", 00:14:22.850 "model_number": "SPDK bdev Controller", 00:14:22.850 "serial_number": "SPDK0", 00:14:22.850 "firmware_revision": "24.05", 00:14:22.850 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:22.850 "oacs": { 00:14:22.850 "security": 0, 00:14:22.850 "format": 0, 00:14:22.850 "firmware": 0, 00:14:22.850 "ns_manage": 0 00:14:22.850 }, 00:14:22.850 "multi_ctrlr": true, 00:14:22.850 "ana_reporting": false 00:14:22.850 }, 00:14:22.850 "vs": { 00:14:22.850 "nvme_version": "1.3" 00:14:22.850 }, 00:14:22.850 "ns_data": { 00:14:22.850 "id": 1, 00:14:22.850 "can_share": true 00:14:22.850 } 00:14:22.850 } 00:14:22.850 ], 00:14:22.850 "mp_policy": "active_passive" 00:14:22.850 } 00:14:22.850 } 00:14:22.850 ] 00:14:22.850 01:16:58 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4054831 00:14:22.850 01:16:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:22.850 01:16:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:23.108 Running I/O for 10 seconds... 00:14:24.046 Latency(us) 00:14:24.046 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.046 Nvme0n1 : 1.00 23757.00 92.80 0.00 0.00 0.00 0.00 0.00 00:14:24.046 =================================================================================================================== 00:14:24.046 Total : 23757.00 92.80 0.00 0.00 0.00 0.00 0.00 00:14:24.046 00:14:25.016 01:17:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 892c1a2a-dd6d-439c-8550-1c40e1077296 00:14:25.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.016 Nvme0n1 : 2.00 23767.50 92.84 0.00 0.00 0.00 0.00 0.00 00:14:25.016 =================================================================================================================== 00:14:25.016 Total : 23767.50 92.84 0.00 0.00 0.00 0.00 0.00 00:14:25.016 00:14:25.016 true 00:14:25.016 01:17:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 892c1a2a-dd6d-439c-8550-1c40e1077296 00:14:25.016 01:17:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:25.275 01:17:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:25.275 01:17:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:25.275 01:17:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 4054831 00:14:26.210 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.210 Nvme0n1 : 3.00 23882.33 93.29 0.00 0.00 0.00 0.00 0.00 00:14:26.210 =================================================================================================================== 00:14:26.210 Total : 23882.33 93.29 0.00 0.00 0.00 0.00 0.00 00:14:26.210 00:14:27.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.150 Nvme0n1 : 4.00 24003.25 93.76 0.00 0.00 0.00 0.00 0.00 00:14:27.150 =================================================================================================================== 00:14:27.150 Total : 24003.25 93.76 0.00 0.00 0.00 0.00 0.00 00:14:27.150 00:14:28.084 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:28.084 Nvme0n1 : 5.00 24075.80 94.05 0.00 0.00 0.00 0.00 0.00 00:14:28.084 =================================================================================================================== 00:14:28.084 Total : 24075.80 94.05 0.00 0.00 0.00 0.00 0.00 00:14:28.084 00:14:29.020 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.020 Nvme0n1 : 6.00 24130.83 94.26 0.00 0.00 0.00 0.00 0.00 00:14:29.020 
=================================================================================================================== 00:14:29.020 Total : 24130.83 94.26 0.00 0.00 0.00 0.00 0.00 00:14:29.020 00:14:29.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.965 Nvme0n1 : 7.00 24162.71 94.39 0.00 0.00 0.00 0.00 0.00 00:14:29.965 =================================================================================================================== 00:14:29.965 Total : 24162.71 94.39 0.00 0.00 0.00 0.00 0.00 00:14:29.965 00:14:30.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.904 Nvme0n1 : 8.00 24194.12 94.51 0.00 0.00 0.00 0.00 0.00 00:14:30.904 =================================================================================================================== 00:14:30.904 Total : 24194.12 94.51 0.00 0.00 0.00 0.00 0.00 00:14:30.904 00:14:32.281 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.281 Nvme0n1 : 9.00 24213.78 94.59 0.00 0.00 0.00 0.00 0.00 00:14:32.281 =================================================================================================================== 00:14:32.281 Total : 24213.78 94.59 0.00 0.00 0.00 0.00 0.00 00:14:32.281 00:14:33.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.218 Nvme0n1 : 10.00 24238.70 94.68 0.00 0.00 0.00 0.00 0.00 00:14:33.218 =================================================================================================================== 00:14:33.218 Total : 24238.70 94.68 0.00 0.00 0.00 0.00 0.00 00:14:33.218 00:14:33.218 00:14:33.218 Latency(us) 00:14:33.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.218 Nvme0n1 : 10.00 24236.42 94.67 0.00 0.00 5277.59 2726.30 15938.36 00:14:33.218 =================================================================================================================== 00:14:33.218 Total : 24236.42 94.67 0.00 0.00 5277.59 2726.30 15938.36 00:14:33.218 0 00:14:33.218 01:17:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4054559 00:14:33.218 01:17:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 4054559 ']' 00:14:33.218 01:17:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 4054559 00:14:33.218 01:17:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:14:33.218 01:17:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:33.218 01:17:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4054559 00:14:33.218 01:17:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:33.218 01:17:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:33.218 01:17:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4054559' 00:14:33.218 killing process with pid 4054559 00:14:33.218 01:17:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 4054559 00:14:33.218 Received shutdown signal, test time was about 10.000000 seconds 00:14:33.218 00:14:33.218 Latency(us) 00:14:33.218 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:14:33.218 =================================================================================================================== 00:14:33.218 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:33.218 01:17:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 4054559 00:14:33.218 01:17:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:33.477 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:33.736 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 892c1a2a-dd6d-439c-8550-1c40e1077296 00:14:33.736 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:33.736 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:33.736 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:33.736 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:33.995 [2024-05-15 01:17:09.514655] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:33.995 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 892c1a2a-dd6d-439c-8550-1c40e1077296 00:14:33.995 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:33.995 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 892c1a2a-dd6d-439c-8550-1c40e1077296 00:14:33.995 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:33.995 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:33.995 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:33.995 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:33.995 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:33.995 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:33.995 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:33.995 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:33.995 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 892c1a2a-dd6d-439c-8550-1c40e1077296 00:14:34.254 request: 00:14:34.254 { 00:14:34.254 "uuid": "892c1a2a-dd6d-439c-8550-1c40e1077296", 00:14:34.254 "method": "bdev_lvol_get_lvstores", 00:14:34.254 "req_id": 1 00:14:34.254 } 00:14:34.254 Got JSON-RPC error response 00:14:34.254 response: 00:14:34.254 { 00:14:34.254 "code": -19, 00:14:34.254 "message": "No such device" 00:14:34.254 } 00:14:34.254 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:34.254 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:34.254 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:34.254 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:34.254 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:34.254 aio_bdev 00:14:34.254 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ab508257-82f5-4d88-9b8b-062102e3ae9d 00:14:34.254 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=ab508257-82f5-4d88-9b8b-062102e3ae9d 00:14:34.254 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:34.254 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:14:34.254 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:34.254 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:34.255 01:17:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:34.513 01:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ab508257-82f5-4d88-9b8b-062102e3ae9d -t 2000 00:14:34.513 [ 00:14:34.513 { 00:14:34.513 "name": "ab508257-82f5-4d88-9b8b-062102e3ae9d", 00:14:34.513 "aliases": [ 00:14:34.513 "lvs/lvol" 00:14:34.513 ], 00:14:34.513 "product_name": "Logical Volume", 00:14:34.513 "block_size": 4096, 00:14:34.513 "num_blocks": 38912, 00:14:34.513 "uuid": "ab508257-82f5-4d88-9b8b-062102e3ae9d", 00:14:34.513 "assigned_rate_limits": { 00:14:34.513 "rw_ios_per_sec": 0, 00:14:34.513 "rw_mbytes_per_sec": 0, 00:14:34.513 "r_mbytes_per_sec": 0, 00:14:34.513 "w_mbytes_per_sec": 0 00:14:34.513 }, 00:14:34.513 "claimed": false, 00:14:34.513 "zoned": false, 00:14:34.513 "supported_io_types": { 00:14:34.513 "read": true, 00:14:34.513 "write": true, 00:14:34.513 "unmap": true, 00:14:34.513 "write_zeroes": true, 00:14:34.513 "flush": false, 00:14:34.513 "reset": true, 00:14:34.513 "compare": false, 00:14:34.513 "compare_and_write": false, 00:14:34.513 "abort": false, 00:14:34.513 "nvme_admin": false, 00:14:34.513 "nvme_io": false 00:14:34.513 }, 00:14:34.513 "driver_specific": { 00:14:34.513 "lvol": { 00:14:34.513 "lvol_store_uuid": "892c1a2a-dd6d-439c-8550-1c40e1077296", 00:14:34.513 "base_bdev": "aio_bdev", 
00:14:34.513 "thin_provision": false, 00:14:34.513 "num_allocated_clusters": 38, 00:14:34.513 "snapshot": false, 00:14:34.513 "clone": false, 00:14:34.513 "esnap_clone": false 00:14:34.513 } 00:14:34.513 } 00:14:34.513 } 00:14:34.513 ] 00:14:34.513 01:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:14:34.513 01:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 892c1a2a-dd6d-439c-8550-1c40e1077296 00:14:34.513 01:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:34.771 01:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:34.771 01:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 892c1a2a-dd6d-439c-8550-1c40e1077296 00:14:34.771 01:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:35.028 01:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:35.028 01:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ab508257-82f5-4d88-9b8b-062102e3ae9d 00:14:35.028 01:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 892c1a2a-dd6d-439c-8550-1c40e1077296 00:14:35.399 01:17:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:35.399 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:35.399 00:14:35.399 real 0m15.570s 00:14:35.399 user 0m14.707s 00:14:35.399 sys 0m1.966s 00:14:35.400 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:35.400 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:35.400 ************************************ 00:14:35.400 END TEST lvs_grow_clean 00:14:35.400 ************************************ 00:14:35.658 01:17:11 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:35.658 01:17:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:35.658 01:17:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:35.658 01:17:11 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:35.658 ************************************ 00:14:35.658 START TEST lvs_grow_dirty 00:14:35.658 ************************************ 00:14:35.658 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:14:35.658 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:35.658 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:35.658 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:14:35.658 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:35.658 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:35.658 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:35.658 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:35.658 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:35.658 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:35.916 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:35.916 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:35.916 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=451838ee-1021-489d-8985-97e12fdab4fa 00:14:35.916 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451838ee-1021-489d-8985-97e12fdab4fa 00:14:35.916 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:36.175 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:36.175 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:36.175 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 451838ee-1021-489d-8985-97e12fdab4fa lvol 150 00:14:36.433 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=38d7399b-14e0-4a5b-a74f-5a87c494ad24 00:14:36.433 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:36.433 01:17:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:36.433 [2024-05-15 01:17:12.073396] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:36.433 [2024-05-15 01:17:12.073449] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:36.433 true 00:14:36.433 01:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451838ee-1021-489d-8985-97e12fdab4fa 00:14:36.433 01:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:14:36.692 01:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:36.692 01:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:36.951 01:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 38d7399b-14e0-4a5b-a74f-5a87c494ad24 00:14:36.951 01:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:37.209 [2024-05-15 01:17:12.731378] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.209 01:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:37.210 01:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=4057292 00:14:37.210 01:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:37.210 01:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 4057292 /var/tmp/bdevperf.sock 00:14:37.210 01:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 4057292 ']' 00:14:37.210 01:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:37.210 01:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:37.210 01:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:37.210 01:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:37.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:37.210 01:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:37.210 01:17:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:37.469 [2024-05-15 01:17:12.938221] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
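A note on the cluster accounting running through this test, using only numbers that appear in the transcript: the lvstore is created with --cluster-sz 4194304 (4 MiB) on a 200 MiB AIO file, which leaves 49 data clusters once lvstore metadata is set aside, and 99 once the file has been enlarged to 400 MiB. Truncating the backing file and calling bdev_aio_rescan only resizes the base bdev, which is why total_data_clusters still reads 49 at this point; the store itself is only enlarged later by bdev_lvol_grow_lvstore while bdevperf is running. The 150 MiB lvol rounds up to 38 clusters ("num_allocated_clusters": 38 in the bdev dumps), so a grown store of 99 clusters leaves 99 - 38 = 61 free, which is exactly what the free_clusters checks assert. A minimal sketch of the resize-and-verify path, with $aio_file and $lvs as placeholders for the test's backing file and lvstore UUID and the rpc.py path shortened:

    # Grow the file backing the AIO bdev and let SPDK pick up the new size;
    # this alone does not change the lvstore's data-cluster count.
    truncate -s 400M "$aio_file"
    scripts/rpc.py bdev_aio_rescan aio_bdev

    # The lvstore is only enlarged by an explicit grow call ...
    scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"

    # ... after which total_data_clusters should report 99 rather than 49.
    clusters=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( clusters == 99 )) || { echo "unexpected total_data_clusters: $clusters" >&2; exit 1; }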
00:14:37.469 [2024-05-15 01:17:12.938279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4057292 ] 00:14:37.469 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.469 [2024-05-15 01:17:13.007554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.469 [2024-05-15 01:17:13.075418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.404 01:17:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:38.404 01:17:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:14:38.404 01:17:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:38.404 Nvme0n1 00:14:38.404 01:17:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:38.693 [ 00:14:38.693 { 00:14:38.693 "name": "Nvme0n1", 00:14:38.693 "aliases": [ 00:14:38.693 "38d7399b-14e0-4a5b-a74f-5a87c494ad24" 00:14:38.693 ], 00:14:38.693 "product_name": "NVMe disk", 00:14:38.693 "block_size": 4096, 00:14:38.693 "num_blocks": 38912, 00:14:38.693 "uuid": "38d7399b-14e0-4a5b-a74f-5a87c494ad24", 00:14:38.693 "assigned_rate_limits": { 00:14:38.693 "rw_ios_per_sec": 0, 00:14:38.693 "rw_mbytes_per_sec": 0, 00:14:38.693 "r_mbytes_per_sec": 0, 00:14:38.693 "w_mbytes_per_sec": 0 00:14:38.693 }, 00:14:38.693 "claimed": false, 00:14:38.693 "zoned": false, 00:14:38.693 "supported_io_types": { 00:14:38.693 "read": true, 00:14:38.693 "write": true, 00:14:38.693 "unmap": true, 00:14:38.693 "write_zeroes": true, 00:14:38.693 "flush": true, 00:14:38.693 "reset": true, 00:14:38.693 "compare": true, 00:14:38.693 "compare_and_write": true, 00:14:38.693 "abort": true, 00:14:38.693 "nvme_admin": true, 00:14:38.693 "nvme_io": true 00:14:38.693 }, 00:14:38.693 "memory_domains": [ 00:14:38.693 { 00:14:38.693 "dma_device_id": "system", 00:14:38.693 "dma_device_type": 1 00:14:38.693 } 00:14:38.693 ], 00:14:38.693 "driver_specific": { 00:14:38.693 "nvme": [ 00:14:38.693 { 00:14:38.693 "trid": { 00:14:38.693 "trtype": "TCP", 00:14:38.693 "adrfam": "IPv4", 00:14:38.693 "traddr": "10.0.0.2", 00:14:38.693 "trsvcid": "4420", 00:14:38.693 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:38.693 }, 00:14:38.693 "ctrlr_data": { 00:14:38.693 "cntlid": 1, 00:14:38.693 "vendor_id": "0x8086", 00:14:38.693 "model_number": "SPDK bdev Controller", 00:14:38.693 "serial_number": "SPDK0", 00:14:38.693 "firmware_revision": "24.05", 00:14:38.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:38.693 "oacs": { 00:14:38.693 "security": 0, 00:14:38.693 "format": 0, 00:14:38.693 "firmware": 0, 00:14:38.693 "ns_manage": 0 00:14:38.693 }, 00:14:38.693 "multi_ctrlr": true, 00:14:38.693 "ana_reporting": false 00:14:38.693 }, 00:14:38.693 "vs": { 00:14:38.693 "nvme_version": "1.3" 00:14:38.693 }, 00:14:38.693 "ns_data": { 00:14:38.693 "id": 1, 00:14:38.693 "can_share": true 00:14:38.693 } 00:14:38.693 } 00:14:38.693 ], 00:14:38.693 "mp_policy": "active_passive" 00:14:38.693 } 00:14:38.693 } 00:14:38.693 ] 00:14:38.693 01:17:14 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=4057558 00:14:38.693 01:17:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:38.693 01:17:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:38.693 Running I/O for 10 seconds... 00:14:40.071 Latency(us) 00:14:40.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:40.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.071 Nvme0n1 : 1.00 22626.00 88.38 0.00 0.00 0.00 0.00 0.00 00:14:40.071 =================================================================================================================== 00:14:40.071 Total : 22626.00 88.38 0.00 0.00 0.00 0.00 0.00 00:14:40.071 00:14:40.638 01:17:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 451838ee-1021-489d-8985-97e12fdab4fa 00:14:40.897 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.897 Nvme0n1 : 2.00 22989.00 89.80 0.00 0.00 0.00 0.00 0.00 00:14:40.897 =================================================================================================================== 00:14:40.897 Total : 22989.00 89.80 0.00 0.00 0.00 0.00 0.00 00:14:40.897 00:14:40.897 true 00:14:40.897 01:17:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451838ee-1021-489d-8985-97e12fdab4fa 00:14:40.897 01:17:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:41.157 01:17:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:41.157 01:17:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:41.157 01:17:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 4057558 00:14:41.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.723 Nvme0n1 : 3.00 23024.67 89.94 0.00 0.00 0.00 0.00 0.00 00:14:41.723 =================================================================================================================== 00:14:41.723 Total : 23024.67 89.94 0.00 0.00 0.00 0.00 0.00 00:14:41.723 00:14:42.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.661 Nvme0n1 : 4.00 23128.50 90.35 0.00 0.00 0.00 0.00 0.00 00:14:42.661 =================================================================================================================== 00:14:42.661 Total : 23128.50 90.35 0.00 0.00 0.00 0.00 0.00 00:14:42.661 00:14:44.039 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.039 Nvme0n1 : 5.00 23203.60 90.64 0.00 0.00 0.00 0.00 0.00 00:14:44.039 =================================================================================================================== 00:14:44.039 Total : 23203.60 90.64 0.00 0.00 0.00 0.00 0.00 00:14:44.039 00:14:44.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.976 Nvme0n1 : 6.00 23208.33 90.66 0.00 0.00 0.00 0.00 0.00 00:14:44.976 
=================================================================================================================== 00:14:44.976 Total : 23208.33 90.66 0.00 0.00 0.00 0.00 0.00 00:14:44.976 00:14:45.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.914 Nvme0n1 : 7.00 23224.29 90.72 0.00 0.00 0.00 0.00 0.00 00:14:45.914 =================================================================================================================== 00:14:45.914 Total : 23224.29 90.72 0.00 0.00 0.00 0.00 0.00 00:14:45.914 00:14:46.851 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.851 Nvme0n1 : 8.00 23265.25 90.88 0.00 0.00 0.00 0.00 0.00 00:14:46.851 =================================================================================================================== 00:14:46.851 Total : 23265.25 90.88 0.00 0.00 0.00 0.00 0.00 00:14:46.851 00:14:47.788 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.788 Nvme0n1 : 9.00 23294.44 90.99 0.00 0.00 0.00 0.00 0.00 00:14:47.788 =================================================================================================================== 00:14:47.788 Total : 23294.44 90.99 0.00 0.00 0.00 0.00 0.00 00:14:47.788 00:14:48.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.724 Nvme0n1 : 10.00 23327.40 91.12 0.00 0.00 0.00 0.00 0.00 00:14:48.724 =================================================================================================================== 00:14:48.724 Total : 23327.40 91.12 0.00 0.00 0.00 0.00 0.00 00:14:48.724 00:14:48.724 00:14:48.724 Latency(us) 00:14:48.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.724 Nvme0n1 : 10.01 23327.15 91.12 0.00 0.00 5482.99 2988.44 24431.82 00:14:48.724 =================================================================================================================== 00:14:48.724 Total : 23327.15 91.12 0.00 0.00 5482.99 2988.44 24431.82 00:14:48.724 0 00:14:48.724 01:17:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 4057292 00:14:48.724 01:17:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 4057292 ']' 00:14:48.724 01:17:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 4057292 00:14:48.724 01:17:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:14:48.724 01:17:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:48.724 01:17:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4057292 00:14:48.983 01:17:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:48.984 01:17:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:48.984 01:17:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4057292' 00:14:48.984 killing process with pid 4057292 00:14:48.984 01:17:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 4057292 00:14:48.984 Received shutdown signal, test time was about 10.000000 seconds 00:14:48.984 00:14:48.984 Latency(us) 00:14:48.984 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:14:48.984 =================================================================================================================== 00:14:48.984 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:48.984 01:17:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 4057292 00:14:48.984 01:17:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:49.242 01:17:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:49.502 01:17:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451838ee-1021-489d-8985-97e12fdab4fa 00:14:49.502 01:17:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:49.502 01:17:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:49.502 01:17:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:49.502 01:17:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 4053997 00:14:49.502 01:17:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 4053997 00:14:49.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 4053997 Killed "${NVMF_APP[@]}" "$@" 00:14:49.762 01:17:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:49.762 01:17:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:49.762 01:17:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:49.762 01:17:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:49.762 01:17:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:49.762 01:17:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=4059433 00:14:49.762 01:17:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 4059433 00:14:49.762 01:17:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 4059433 ']' 00:14:49.762 01:17:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.762 01:17:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:49.762 01:17:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
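This is the step that makes the dirty variant dirty: instead of deleting the lvol and lvstore, the test hard-kills the running nvmf target (kill -9 4053997, the "Killed" line above), leaving the lvstore metadata on the AIO file unflushed, and then starts a fresh nvmf_tgt (pid 4059433). When the new target re-creates the AIO bdev over the same file, the blobstore recovery notices that follow ("Performing recovery on blobstore", "Recover: blob 0x0"/"0x1") show the lvstore being replayed, after which the lvol comes back with its 38 allocated clusters intact. A rough sketch of that restart-and-recover sequence, with $old_nvmf_pid, $aio_file and $lvol as placeholders and the target's full argument list and paths trimmed:

    # Hard-kill the target so the lvstore is left dirty on the backing file ...
    kill -9 "$old_nvmf_pid"

    # ... then start a fresh target and re-create the AIO bdev over the same file;
    # opening it triggers blobstore recovery and the lvol bdev reappears.
    build/bin/nvmf_tgt -m 0x1 &
    scripts/rpc.py bdev_aio_create "$aio_file" aio_bdev 4096
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py bdev_get_bdevs -b "$lvol" -t 2000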
00:14:49.762 01:17:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:49.762 01:17:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:49.762 01:17:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:49.762 [2024-05-15 01:17:25.258050] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:14:49.763 [2024-05-15 01:17:25.258097] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.763 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.763 [2024-05-15 01:17:25.333515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.763 [2024-05-15 01:17:25.406636] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.763 [2024-05-15 01:17:25.406673] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.763 [2024-05-15 01:17:25.406683] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.763 [2024-05-15 01:17:25.406691] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.763 [2024-05-15 01:17:25.406698] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:49.763 [2024-05-15 01:17:25.406723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.700 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:50.700 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:14:50.700 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:50.700 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:50.700 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:50.700 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.700 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:50.700 [2024-05-15 01:17:26.246975] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:50.700 [2024-05-15 01:17:26.247060] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:50.700 [2024-05-15 01:17:26.247085] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:50.700 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:50.700 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 38d7399b-14e0-4a5b-a74f-5a87c494ad24 00:14:50.700 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=38d7399b-14e0-4a5b-a74f-5a87c494ad24 00:14:50.700 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:50.700 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:14:50.700 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:50.700 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:50.700 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:50.959 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 38d7399b-14e0-4a5b-a74f-5a87c494ad24 -t 2000 00:14:50.959 [ 00:14:50.959 { 00:14:50.959 "name": "38d7399b-14e0-4a5b-a74f-5a87c494ad24", 00:14:50.959 "aliases": [ 00:14:50.959 "lvs/lvol" 00:14:50.959 ], 00:14:50.959 "product_name": "Logical Volume", 00:14:50.959 "block_size": 4096, 00:14:50.959 "num_blocks": 38912, 00:14:50.959 "uuid": "38d7399b-14e0-4a5b-a74f-5a87c494ad24", 00:14:50.959 "assigned_rate_limits": { 00:14:50.959 "rw_ios_per_sec": 0, 00:14:50.959 "rw_mbytes_per_sec": 0, 00:14:50.959 "r_mbytes_per_sec": 0, 00:14:50.959 "w_mbytes_per_sec": 0 00:14:50.959 }, 00:14:50.959 "claimed": false, 00:14:50.959 "zoned": false, 00:14:50.959 "supported_io_types": { 00:14:50.959 "read": true, 00:14:50.959 "write": true, 00:14:50.959 "unmap": true, 00:14:50.959 "write_zeroes": true, 00:14:50.959 "flush": false, 00:14:50.959 "reset": true, 00:14:50.959 "compare": false, 00:14:50.959 "compare_and_write": false, 00:14:50.959 "abort": false, 00:14:50.959 "nvme_admin": false, 00:14:50.959 "nvme_io": false 00:14:50.959 }, 00:14:50.959 "driver_specific": { 00:14:50.959 "lvol": { 00:14:50.959 "lvol_store_uuid": "451838ee-1021-489d-8985-97e12fdab4fa", 00:14:50.959 "base_bdev": "aio_bdev", 00:14:50.959 "thin_provision": false, 00:14:50.959 "num_allocated_clusters": 38, 00:14:50.959 "snapshot": false, 00:14:50.959 "clone": false, 00:14:50.959 "esnap_clone": false 00:14:50.959 } 00:14:50.959 } 00:14:50.959 } 00:14:50.959 ] 00:14:50.959 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:14:50.959 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451838ee-1021-489d-8985-97e12fdab4fa 00:14:50.959 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:51.218 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:51.218 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451838ee-1021-489d-8985-97e12fdab4fa 00:14:51.218 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:51.477 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:51.477 01:17:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:51.477 [2024-05-15 01:17:27.083230] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore 
lvs 00:14:51.477 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451838ee-1021-489d-8985-97e12fdab4fa 00:14:51.477 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:51.477 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451838ee-1021-489d-8985-97e12fdab4fa 00:14:51.477 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.477 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.477 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.477 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.477 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.477 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:51.477 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.477 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:51.477 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451838ee-1021-489d-8985-97e12fdab4fa 00:14:51.736 request: 00:14:51.736 { 00:14:51.736 "uuid": "451838ee-1021-489d-8985-97e12fdab4fa", 00:14:51.736 "method": "bdev_lvol_get_lvstores", 00:14:51.736 "req_id": 1 00:14:51.736 } 00:14:51.736 Got JSON-RPC error response 00:14:51.736 response: 00:14:51.736 { 00:14:51.736 "code": -19, 00:14:51.736 "message": "No such device" 00:14:51.736 } 00:14:51.736 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:51.736 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:51.736 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:51.736 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:51.736 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:51.994 aio_bdev 00:14:51.994 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 38d7399b-14e0-4a5b-a74f-5a87c494ad24 00:14:51.994 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=38d7399b-14e0-4a5b-a74f-5a87c494ad24 00:14:51.994 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:51.994 01:17:27 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:14:51.994 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:51.994 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:51.994 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:51.994 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 38d7399b-14e0-4a5b-a74f-5a87c494ad24 -t 2000 00:14:52.252 [ 00:14:52.252 { 00:14:52.252 "name": "38d7399b-14e0-4a5b-a74f-5a87c494ad24", 00:14:52.252 "aliases": [ 00:14:52.252 "lvs/lvol" 00:14:52.252 ], 00:14:52.252 "product_name": "Logical Volume", 00:14:52.252 "block_size": 4096, 00:14:52.252 "num_blocks": 38912, 00:14:52.252 "uuid": "38d7399b-14e0-4a5b-a74f-5a87c494ad24", 00:14:52.252 "assigned_rate_limits": { 00:14:52.252 "rw_ios_per_sec": 0, 00:14:52.252 "rw_mbytes_per_sec": 0, 00:14:52.252 "r_mbytes_per_sec": 0, 00:14:52.252 "w_mbytes_per_sec": 0 00:14:52.252 }, 00:14:52.252 "claimed": false, 00:14:52.252 "zoned": false, 00:14:52.252 "supported_io_types": { 00:14:52.252 "read": true, 00:14:52.252 "write": true, 00:14:52.252 "unmap": true, 00:14:52.252 "write_zeroes": true, 00:14:52.252 "flush": false, 00:14:52.252 "reset": true, 00:14:52.252 "compare": false, 00:14:52.252 "compare_and_write": false, 00:14:52.252 "abort": false, 00:14:52.252 "nvme_admin": false, 00:14:52.252 "nvme_io": false 00:14:52.252 }, 00:14:52.252 "driver_specific": { 00:14:52.252 "lvol": { 00:14:52.252 "lvol_store_uuid": "451838ee-1021-489d-8985-97e12fdab4fa", 00:14:52.252 "base_bdev": "aio_bdev", 00:14:52.252 "thin_provision": false, 00:14:52.252 "num_allocated_clusters": 38, 00:14:52.252 "snapshot": false, 00:14:52.252 "clone": false, 00:14:52.252 "esnap_clone": false 00:14:52.252 } 00:14:52.252 } 00:14:52.252 } 00:14:52.252 ] 00:14:52.252 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:14:52.252 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451838ee-1021-489d-8985-97e12fdab4fa 00:14:52.252 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:52.252 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:52.253 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 451838ee-1021-489d-8985-97e12fdab4fa 00:14:52.253 01:17:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:52.510 01:17:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:52.510 01:17:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 38d7399b-14e0-4a5b-a74f-5a87c494ad24 00:14:52.769 01:17:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 
451838ee-1021-489d-8985-97e12fdab4fa 00:14:52.769 01:17:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:53.027 01:17:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:53.027 00:14:53.027 real 0m17.477s 00:14:53.027 user 0m43.537s 00:14:53.027 sys 0m4.996s 00:14:53.027 01:17:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:53.027 01:17:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:53.027 ************************************ 00:14:53.027 END TEST lvs_grow_dirty 00:14:53.027 ************************************ 00:14:53.027 01:17:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:53.028 01:17:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:14:53.028 01:17:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:14:53.028 01:17:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:14:53.028 01:17:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:53.028 01:17:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:14:53.028 01:17:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:14:53.028 01:17:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:14:53.028 01:17:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:53.028 nvmf_trace.0 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:53.326 rmmod nvme_tcp 00:14:53.326 rmmod nvme_fabrics 00:14:53.326 rmmod nvme_keyring 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 4059433 ']' 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 4059433 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 4059433 ']' 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 4059433 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:53.326 01:17:28 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4059433 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4059433' 00:14:53.326 killing process with pid 4059433 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 4059433 00:14:53.326 01:17:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 4059433 00:14:53.585 01:17:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.585 01:17:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:53.585 01:17:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:53.585 01:17:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.585 01:17:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:53.585 01:17:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.585 01:17:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.585 01:17:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.491 01:17:31 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:55.491 00:14:55.491 real 0m44.029s 00:14:55.491 user 1m4.356s 00:14:55.491 sys 0m12.951s 00:14:55.491 01:17:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:55.491 01:17:31 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:55.491 ************************************ 00:14:55.491 END TEST nvmf_lvs_grow 00:14:55.491 ************************************ 00:14:55.491 01:17:31 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:55.491 01:17:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:55.491 01:17:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:55.491 01:17:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:55.750 ************************************ 00:14:55.750 START TEST nvmf_bdev_io_wait 00:14:55.750 ************************************ 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:55.750 * Looking for test storage... 
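With that, the whole nvmf_lvs_grow suite is done (0m44s wall clock per the timing block above) and the harness moves on to nvmf_bdev_io_wait. For reference, the nvmftestfini teardown that just ran reduces to roughly the following, reconstructed only from commands visible in this transcript ($nvmfpid stands for the target pid, 4059433 here; the helper's real definition does more):

    # Flush I/O and unload the kernel initiator modules; the rmmod output above
    # shows nvme_tcp, nvme_fabrics and nvme_keyring being removed.
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # Stop the nvmf target and clear the test interface's addresses.
    kill "$nvmfpid" && wait "$nvmfpid"
    ip -4 addr flush cvl_0_1

The variable and PATH dump that follows is bdev_io_wait.sh re-sourcing nvmf/common.sh and running nvmftestinit to set the next suite up from a clean state.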
00:14:55.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.750 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:55.751 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:55.751 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:14:55.751 01:17:31 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:02.322 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:02.322 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:02.322 Found net devices under 0000:af:00.0: cvl_0_0 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:02.322 Found net devices under 0000:af:00.1: cvl_0_1 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:02.322 01:17:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:02.322 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:02.582 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:02.582 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:02.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:02.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:15:02.582 00:15:02.582 --- 10.0.0.2 ping statistics --- 00:15:02.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.582 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:15:02.582 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:02.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:02.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.243 ms 00:15:02.582 00:15:02.582 --- 10.0.0.1 ping statistics --- 00:15:02.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:02.582 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:15:02.582 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:02.582 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:02.582 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:02.582 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:02.582 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:02.583 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:02.583 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:02.583 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:02.583 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:02.583 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:02.583 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:02.583 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:02.583 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:02.583 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=4063726 00:15:02.583 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:02.583 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 4063726 00:15:02.583 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 4063726 ']' 00:15:02.583 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.583 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:02.583 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.583 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:02.583 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:02.583 [2024-05-15 01:17:38.124383] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
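The trace above shows nvmftestinit discovering the two E810 ports (0x8086:0x159b at 0000:af:00.0/1, exposed as cvl_0_0 and cvl_0_1) and then moving one of them into a network namespace, so the NVMe/TCP target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1 in the root namespace) exchange traffic over a real link. A condensed sketch of that bring-up, with the commands lifted from the trace:

  # interface and namespace names as created by nvmftestinit above
  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"                        # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"    # target side
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                        # host -> namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1    # namespace -> host
  modprobe nvme-tcp                         # initiator-side kernel driver

Once both pings succeed, the harness prefixes NVMF_APP with "ip netns exec cvl_0_0_ns_spdk" and starts nvmf_tgt inside the namespace, as shown above.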
00:15:02.583 [2024-05-15 01:17:38.124429] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.583 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.583 [2024-05-15 01:17:38.199342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:02.842 [2024-05-15 01:17:38.274885] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.842 [2024-05-15 01:17:38.274922] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.842 [2024-05-15 01:17:38.274931] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.842 [2024-05-15 01:17:38.274940] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.842 [2024-05-15 01:17:38.274948] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:02.842 [2024-05-15 01:17:38.275000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.842 [2024-05-15 01:17:38.275116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.842 [2024-05-15 01:17:38.275223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:02.842 [2024-05-15 01:17:38.275225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.410 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:03.410 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:15:03.410 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:03.410 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:03.410 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.410 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.410 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:03.410 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.410 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.410 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.410 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:03.410 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.410 01:17:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.410 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.410 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:03.410 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.410 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.410 [2024-05-15 01:17:39.045626] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.410 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.410 01:17:39 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:03.410 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.410 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.410 Malloc0 00:15:03.410 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.410 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:03.410 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.410 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:03.670 [2024-05-15 01:17:39.118658] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:03.670 [2024-05-15 01:17:39.118914] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=4064007 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=4064009 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:03.670 { 00:15:03.670 "params": { 00:15:03.670 "name": "Nvme$subsystem", 00:15:03.670 "trtype": "$TEST_TRANSPORT", 00:15:03.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:03.670 "adrfam": "ipv4", 00:15:03.670 "trsvcid": "$NVMF_PORT", 00:15:03.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:03.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:03.670 "hdgst": ${hdgst:-false}, 00:15:03.670 "ddgst": ${ddgst:-false} 00:15:03.670 }, 00:15:03.670 "method": 
"bdev_nvme_attach_controller" 00:15:03.670 } 00:15:03.670 EOF 00:15:03.670 )") 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=4064011 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:03.670 { 00:15:03.670 "params": { 00:15:03.670 "name": "Nvme$subsystem", 00:15:03.670 "trtype": "$TEST_TRANSPORT", 00:15:03.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:03.670 "adrfam": "ipv4", 00:15:03.670 "trsvcid": "$NVMF_PORT", 00:15:03.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:03.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:03.670 "hdgst": ${hdgst:-false}, 00:15:03.670 "ddgst": ${ddgst:-false} 00:15:03.670 }, 00:15:03.670 "method": "bdev_nvme_attach_controller" 00:15:03.670 } 00:15:03.670 EOF 00:15:03.670 )") 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=4064014 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:03.670 { 00:15:03.670 "params": { 00:15:03.670 "name": "Nvme$subsystem", 00:15:03.670 "trtype": "$TEST_TRANSPORT", 00:15:03.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:03.670 "adrfam": "ipv4", 00:15:03.670 "trsvcid": "$NVMF_PORT", 00:15:03.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:03.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:03.670 "hdgst": ${hdgst:-false}, 00:15:03.670 "ddgst": ${ddgst:-false} 00:15:03.670 }, 00:15:03.670 "method": "bdev_nvme_attach_controller" 00:15:03.670 } 00:15:03.670 EOF 00:15:03.670 )") 00:15:03.670 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@532 -- # local subsystem config 00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:03.671 { 00:15:03.671 "params": { 00:15:03.671 "name": "Nvme$subsystem", 00:15:03.671 "trtype": "$TEST_TRANSPORT", 00:15:03.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:03.671 "adrfam": "ipv4", 00:15:03.671 "trsvcid": "$NVMF_PORT", 00:15:03.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:03.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:03.671 "hdgst": ${hdgst:-false}, 00:15:03.671 "ddgst": ${ddgst:-false} 00:15:03.671 }, 00:15:03.671 "method": "bdev_nvme_attach_controller" 00:15:03.671 } 00:15:03.671 EOF 00:15:03.671 )") 00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 4064007 00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:03.671 "params": { 00:15:03.671 "name": "Nvme1", 00:15:03.671 "trtype": "tcp", 00:15:03.671 "traddr": "10.0.0.2", 00:15:03.671 "adrfam": "ipv4", 00:15:03.671 "trsvcid": "4420", 00:15:03.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:03.671 "hdgst": false, 00:15:03.671 "ddgst": false 00:15:03.671 }, 00:15:03.671 "method": "bdev_nvme_attach_controller" 00:15:03.671 }' 00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
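Earlier in this test the target was provisioned through rpc_cmd: bdev options, framework init, a TCP transport with 8192-byte I/O units, a 64 MiB Malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. rpc_cmd effectively forwards these arguments to scripts/rpc.py over the default /var/tmp/spdk.sock, so a rough sketch of the same provisioning (not the script's literal code) looks like:

  ./scripts/rpc.py bdev_set_options -p 5 -c 1           # small bdev_io pool/cache, as in the trace
  ./scripts/rpc.py framework_start_init                 # target was started with --wait-for-rpc
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420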
00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:03.671 "params": { 00:15:03.671 "name": "Nvme1", 00:15:03.671 "trtype": "tcp", 00:15:03.671 "traddr": "10.0.0.2", 00:15:03.671 "adrfam": "ipv4", 00:15:03.671 "trsvcid": "4420", 00:15:03.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:03.671 "hdgst": false, 00:15:03.671 "ddgst": false 00:15:03.671 }, 00:15:03.671 "method": "bdev_nvme_attach_controller" 00:15:03.671 }' 00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:03.671 "params": { 00:15:03.671 "name": "Nvme1", 00:15:03.671 "trtype": "tcp", 00:15:03.671 "traddr": "10.0.0.2", 00:15:03.671 "adrfam": "ipv4", 00:15:03.671 "trsvcid": "4420", 00:15:03.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:03.671 "hdgst": false, 00:15:03.671 "ddgst": false 00:15:03.671 }, 00:15:03.671 "method": "bdev_nvme_attach_controller" 00:15:03.671 }' 00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:03.671 01:17:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:03.671 "params": { 00:15:03.671 "name": "Nvme1", 00:15:03.671 "trtype": "tcp", 00:15:03.671 "traddr": "10.0.0.2", 00:15:03.671 "adrfam": "ipv4", 00:15:03.671 "trsvcid": "4420", 00:15:03.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:03.671 "hdgst": false, 00:15:03.671 "ddgst": false 00:15:03.671 }, 00:15:03.671 "method": "bdev_nvme_attach_controller" 00:15:03.671 }' 00:15:03.671 [2024-05-15 01:17:39.171311] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:15:03.671 [2024-05-15 01:17:39.171367] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:03.671 [2024-05-15 01:17:39.172322] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:15:03.671 [2024-05-15 01:17:39.172367] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:03.671 [2024-05-15 01:17:39.173680] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:15:03.671 [2024-05-15 01:17:39.173730] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:03.671 [2024-05-15 01:17:39.173749] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
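The four heredoc fragments above are what gen_nvmf_target_json emits for each bdevperf instance: a bdev_nvme_attach_controller entry pointing at 10.0.0.2:4420 and nqn.2016-06.io.spdk:cnode1, fed to bdevperf through /dev/fd/63 by process substitution. Four bdevperf processes run on separate cores (masks 0x10/0x20/0x40/0x80, shm ids 1-4) with queue depth 128 and 4 KiB I/O for one-second write, read, flush and unmap passes. A sketch of the write instance with the config written to a regular file; the outer "subsystems"/"bdev" wrapper is an assumption about gen_nvmf_target_json's framing (the harness may add further bdev entries), while the params block is copied from the trace:

  cat > /tmp/nvme1.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }
  EOF
  # -m core mask, -i shm id, -q queue depth, -o I/O size (bytes), -w workload, -t seconds, -s DPDK memory (MB)
  ./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256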
00:15:03.671 [2024-05-15 01:17:39.173790] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:03.671 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.671 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.671 [2024-05-15 01:17:39.357479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.930 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.930 [2024-05-15 01:17:39.430239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:03.931 [2024-05-15 01:17:39.447078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.931 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.931 [2024-05-15 01:17:39.520057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:03.931 [2024-05-15 01:17:39.540449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.931 [2024-05-15 01:17:39.601966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.190 [2024-05-15 01:17:39.628203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:04.190 [2024-05-15 01:17:39.675847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:04.190 Running I/O for 1 seconds... 00:15:04.190 Running I/O for 1 seconds... 00:15:04.190 Running I/O for 1 seconds... 00:15:04.190 Running I/O for 1 seconds... 00:15:05.127 00:15:05.127 Latency(us) 00:15:05.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.127 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:05.127 Nvme1n1 : 1.00 14212.43 55.52 0.00 0.00 8979.93 4954.52 24746.39 00:15:05.127 =================================================================================================================== 00:15:05.127 Total : 14212.43 55.52 0.00 0.00 8979.93 4954.52 24746.39 00:15:05.127 00:15:05.127 Latency(us) 00:15:05.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.127 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:05.127 Nvme1n1 : 1.01 6332.58 24.74 0.00 0.00 20072.64 7811.89 31457.28 00:15:05.127 =================================================================================================================== 00:15:05.127 Total : 6332.58 24.74 0.00 0.00 20072.64 7811.89 31457.28 00:15:05.386 00:15:05.386 Latency(us) 00:15:05.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.386 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:05.386 Nvme1n1 : 1.00 255730.62 998.95 0.00 0.00 498.48 207.26 1133.77 00:15:05.386 =================================================================================================================== 00:15:05.386 Total : 255730.62 998.95 0.00 0.00 498.48 207.26 1133.77 00:15:05.386 00:15:05.386 Latency(us) 00:15:05.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.386 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:05.386 Nvme1n1 : 1.01 6490.83 25.35 0.00 0.00 19645.33 7287.60 39845.89 00:15:05.386 =================================================================================================================== 00:15:05.386 Total : 6490.83 25.35 0.00 0.00 19645.33 7287.60 39845.89 00:15:05.386 01:17:40 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 4064009 00:15:05.646 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 4064011 00:15:05.646 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 4064014 00:15:05.646 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.646 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.646 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:05.646 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.646 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:05.646 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:05.646 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:05.646 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:05.646 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:05.646 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:05.646 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:05.646 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:05.646 rmmod nvme_tcp 00:15:05.646 rmmod nvme_fabrics 00:15:05.646 rmmod nvme_keyring 00:15:05.646 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:05.646 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:05.646 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:05.646 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 4063726 ']' 00:15:05.647 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 4063726 00:15:05.647 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 4063726 ']' 00:15:05.647 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 4063726 00:15:05.647 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:15:05.647 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:05.647 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4063726 00:15:05.647 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:05.647 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:05.647 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4063726' 00:15:05.647 killing process with pid 4063726 00:15:05.647 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 4063726 00:15:05.647 [2024-05-15 01:17:41.259796] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:05.647 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 4063726 00:15:05.906 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:05.906 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:05.906 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:05.906 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.906 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:05.906 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.906 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.906 01:17:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.442 01:17:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:08.442 00:15:08.442 real 0m12.302s 00:15:08.442 user 0m19.883s 00:15:08.442 sys 0m6.979s 00:15:08.442 01:17:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:08.442 01:17:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:08.442 ************************************ 00:15:08.442 END TEST nvmf_bdev_io_wait 00:15:08.442 ************************************ 00:15:08.442 01:17:43 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:08.442 01:17:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:08.442 01:17:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:08.442 01:17:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:08.442 ************************************ 00:15:08.442 START TEST nvmf_queue_depth 00:15:08.442 ************************************ 00:15:08.442 01:17:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:08.442 * Looking for test storage... 
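Each of the four one-second runs above reports its own latency table; flush completes at far higher IOPS than write, read or unmap because the RAM-backed Malloc bdev has nothing to flush. After the results, the EXIT trap tears the target down before nvmf_queue_depth continues its storage probe below. A sketch of that teardown reconstructed from the trace (the namespace removal step is inferred from the _remove_spdk_ns helper's name rather than shown verbatim):

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp            # drops nvme_tcp, nvme_fabrics, nvme_keyring, as in the rmmod lines
  modprobe -v -r nvme-fabrics
  kill 4063726                       # the nvmf_tgt started for this test (killprocess $nvmfpid)
  ip netns delete cvl_0_0_ns_spdk    # assumption: what _remove_spdk_ns amounts to here
  ip -4 addr flush cvl_0_1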
00:15:08.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:08.442 01:17:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:08.442 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:08.442 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.442 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.442 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.442 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.442 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:08.443 01:17:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:15.009 
01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:15.009 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:15.009 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:15.010 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:15.010 Found net devices under 0000:af:00.0: cvl_0_0 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:15.010 Found net devices under 0000:af:00.1: cvl_0_1 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:15.010 01:17:49 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:15.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:15.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:15:15.010 00:15:15.010 --- 10.0.0.2 ping statistics --- 00:15:15.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.010 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:15.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:15.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.284 ms 00:15:15.010 00:15:15.010 --- 10.0.0.1 ping statistics --- 00:15:15.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:15.010 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=4067997 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 4067997 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 4067997 ']' 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:15.010 01:17:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:15.010 [2024-05-15 01:17:50.139786] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
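For reference, the nvmf_tcp_init sequence traced above reduces to the standalone sketch below: it moves the first detected E810 port into a private network namespace so that the target address (10.0.0.2, inside the namespace) and the initiator address (10.0.0.1, on the host) exchange NVMe/TCP traffic over real hardware on a single machine, assuming the two ports can actually reach each other (e.g. cabled back-to-back). The interface names cvl_0_0/cvl_0_1 and the namespace name are the ones this run detected; they would differ on another system.

# minimal sketch of the netns-based TCP test topology used by this run
TGT_NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$TGT_NS"
ip link set cvl_0_0 netns "$TGT_NS"            # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator address stays on the host
ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TGT_NS" ip link set cvl_0_0 up
ip netns exec "$TGT_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP on the default port
ping -c 1 10.0.0.2                             # host -> namespace reachability check
ip netns exec "$TGT_NS" ping -c 1 10.0.0.1     # namespace -> host reachability check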
00:15:15.010 [2024-05-15 01:17:50.139834] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.010 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.010 [2024-05-15 01:17:50.212539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.010 [2024-05-15 01:17:50.280982] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.010 [2024-05-15 01:17:50.281026] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.010 [2024-05-15 01:17:50.281036] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.010 [2024-05-15 01:17:50.281044] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.010 [2024-05-15 01:17:50.281052] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:15.010 [2024-05-15 01:17:50.281075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.269 01:17:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:15.269 01:17:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:15:15.269 01:17:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:15.269 01:17:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:15.269 01:17:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:15.554 01:17:50 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.554 01:17:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:15.554 01:17:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.554 01:17:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:15.554 [2024-05-15 01:17:50.975746] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.554 01:17:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.554 01:17:50 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:15.554 01:17:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.554 01:17:50 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:15.554 Malloc0 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.554 01:17:51 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:15.554 [2024-05-15 01:17:51.031479] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:15.554 [2024-05-15 01:17:51.031713] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=4068079 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 4068079 /var/tmp/bdevperf.sock 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 4068079 ']' 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:15.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:15.554 01:17:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:15.554 [2024-05-15 01:17:51.080604] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
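For orientation, the queue_depth.sh steps traced above (target bring-up, bdev and subsystem creation, and the bdevperf initiator) correspond roughly to the sketch below. rpc_cmd in the trace is the harness wrapper around scripts/rpc.py; this sketch calls rpc.py directly and assumes $SPDK_DIR points at the SPDK source tree, that the namespace from the network setup already exists, and that simply polling for the UNIX socket is an acceptable stand-in for the harness's waitforlisten helper.

# start the target inside the namespace and wait for its RPC socket
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # stand-in for waitforlisten

rpc="$SPDK_DIR/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport with the options the trace passes
$rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MiB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: bdevperf starts idle (-z) until a controller is attached over NVMe/TCP,
# then perform_tests drives the 10 s verify workload at queue depth 1024 with 4 KiB I/O
"$SPDK_DIR/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

The killprocess/nvmftestfini steps traced afterwards undo all of this: bdevperf and nvmf_tgt are killed, the nvme-tcp and nvme-fabrics modules are unloaded, and the harness removes the namespace and flushes the remaining address.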
00:15:15.554 [2024-05-15 01:17:51.080648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4068079 ] 00:15:15.554 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.554 [2024-05-15 01:17:51.150461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.824 [2024-05-15 01:17:51.226955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.390 01:17:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:16.390 01:17:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:15:16.390 01:17:51 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:16.390 01:17:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.390 01:17:51 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:16.649 NVMe0n1 00:15:16.649 01:17:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.649 01:17:52 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:16.649 Running I/O for 10 seconds... 00:15:26.626 00:15:26.626 Latency(us) 00:15:26.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.626 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:26.626 Verification LBA range: start 0x0 length 0x4000 00:15:26.626 NVMe0n1 : 10.06 12985.40 50.72 0.00 0.00 78562.74 19503.51 56203.67 00:15:26.626 =================================================================================================================== 00:15:26.626 Total : 12985.40 50.72 0.00 0.00 78562.74 19503.51 56203.67 00:15:26.626 0 00:15:26.626 01:18:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 4068079 00:15:26.626 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 4068079 ']' 00:15:26.626 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 4068079 00:15:26.626 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:15:26.626 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:26.626 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4068079 00:15:26.885 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:26.885 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:26.885 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4068079' 00:15:26.885 killing process with pid 4068079 00:15:26.885 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 4068079 00:15:26.885 Received shutdown signal, test time was about 10.000000 seconds 00:15:26.885 00:15:26.885 Latency(us) 00:15:26.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.885 =================================================================================================================== 00:15:26.885 Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:26.885 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 4068079 00:15:26.885 01:18:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:26.885 01:18:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:26.885 01:18:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:26.885 01:18:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:26.885 01:18:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:26.885 01:18:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:26.885 01:18:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:26.885 01:18:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:26.885 rmmod nvme_tcp 00:15:26.885 rmmod nvme_fabrics 00:15:27.145 rmmod nvme_keyring 00:15:27.145 01:18:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:27.145 01:18:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:27.145 01:18:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:27.145 01:18:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 4067997 ']' 00:15:27.145 01:18:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 4067997 00:15:27.145 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 4067997 ']' 00:15:27.145 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 4067997 00:15:27.145 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:15:27.145 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:27.145 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4067997 00:15:27.145 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:27.145 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:27.145 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4067997' 00:15:27.145 killing process with pid 4067997 00:15:27.145 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 4067997 00:15:27.145 [2024-05-15 01:18:02.655845] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:27.145 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 4067997 00:15:27.404 01:18:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:27.404 01:18:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:27.404 01:18:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:27.404 01:18:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:27.404 01:18:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:27.404 01:18:02 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.404 01:18:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:27.404 01:18:02 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.322 01:18:04 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:29.322 00:15:29.322 real 0m21.348s 00:15:29.322 user 0m24.810s 00:15:29.322 sys 0m6.869s 00:15:29.322 01:18:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:29.322 01:18:04 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:29.322 ************************************ 00:15:29.322 END TEST nvmf_queue_depth 00:15:29.322 ************************************ 00:15:29.323 01:18:04 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:29.323 01:18:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:29.323 01:18:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:29.323 01:18:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:29.582 ************************************ 00:15:29.582 START TEST nvmf_target_multipath 00:15:29.582 ************************************ 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:29.582 * Looking for test storage... 00:15:29.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:29.582 01:18:05 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:15:29.583 01:18:05 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:15:36.160 01:18:11 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:36.160 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:36.161 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:36.161 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.161 01:18:11 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:36.161 Found net devices under 0000:af:00.0: cvl_0_0 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:36.161 Found net devices under 0000:af:00.1: cvl_0_1 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:36.161 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:36.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:15:36.421 00:15:36.421 --- 10.0.0.2 ping statistics --- 00:15:36.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.421 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:36.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:15:36.421 00:15:36.421 --- 10.0.0.1 ping statistics --- 00:15:36.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.421 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:36.421 only one NIC for nvmf test 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:36.421 rmmod nvme_tcp 00:15:36.421 rmmod nvme_fabrics 00:15:36.421 rmmod nvme_keyring 00:15:36.421 01:18:11 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:36.421 01:18:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:36.421 01:18:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:36.421 01:18:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:36.421 01:18:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:36.421 01:18:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:36.421 01:18:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:36.421 01:18:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:36.421 01:18:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:36.421 01:18:12 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.421 01:18:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.421 01:18:12 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:38.960 00:15:38.960 real 0m9.087s 00:15:38.960 user 0m1.832s 00:15:38.960 sys 0m5.283s 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:38.960 01:18:14 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:38.960 ************************************ 00:15:38.960 END TEST nvmf_target_multipath 00:15:38.960 ************************************ 00:15:38.960 01:18:14 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:38.960 01:18:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:38.960 01:18:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:38.960 01:18:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:38.960 ************************************ 00:15:38.960 START TEST nvmf_zcopy 00:15:38.960 ************************************ 00:15:38.960 01:18:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:38.960 * Looking for test storage... 
00:15:38.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:38.960 01:18:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:38.960 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:38.960 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.960 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.960 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.960 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.960 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.960 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.960 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.960 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.960 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:15:38.961 01:18:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:45.580 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:45.581 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:45.581 
01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:45.581 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:45.581 Found net devices under 0000:af:00.0: cvl_0_0 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:45.581 Found net devices under 0000:af:00.1: cvl_0_1 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:45.581 01:18:20 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:45.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:45.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:15:45.581 00:15:45.581 --- 10.0.0.2 ping statistics --- 00:15:45.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.581 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:45.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:45.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:15:45.581 00:15:45.581 --- 10.0.0.1 ping statistics --- 00:15:45.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.581 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=4078081 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 4078081 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 4078081 ']' 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:45.581 01:18:21 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:45.581 [2024-05-15 01:18:21.224462] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:15:45.581 [2024-05-15 01:18:21.224509] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.581 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.840 [2024-05-15 01:18:21.298466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.840 [2024-05-15 01:18:21.369640] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:45.840 [2024-05-15 01:18:21.369678] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:45.840 [2024-05-15 01:18:21.369690] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:45.840 [2024-05-15 01:18:21.369698] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:45.840 [2024-05-15 01:18:21.369705] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:45.840 [2024-05-15 01:18:21.369726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.407 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:46.407 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:15:46.407 01:18:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:46.408 [2024-05-15 01:18:22.060738] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:46.408 [2024-05-15 01:18:22.084751] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:46.408 [2024-05-15 01:18:22.084924] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:46.408 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:46.667 malloc0 00:15:46.667 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.667 01:18:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:46.667 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.667 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:46.667 01:18:22 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.667 01:18:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:46.667 01:18:22 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:46.667 01:18:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:46.667 01:18:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:46.667 01:18:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:46.667 01:18:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:46.667 { 00:15:46.667 "params": { 00:15:46.667 "name": "Nvme$subsystem", 00:15:46.667 "trtype": "$TEST_TRANSPORT", 00:15:46.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:46.667 "adrfam": "ipv4", 00:15:46.667 "trsvcid": "$NVMF_PORT", 00:15:46.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:46.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:46.667 "hdgst": ${hdgst:-false}, 00:15:46.667 "ddgst": ${ddgst:-false} 00:15:46.667 }, 00:15:46.667 "method": "bdev_nvme_attach_controller" 00:15:46.667 } 00:15:46.667 EOF 00:15:46.667 )") 00:15:46.667 01:18:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:46.667 01:18:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:46.667 01:18:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:46.667 01:18:22 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:46.667 "params": { 00:15:46.667 "name": "Nvme1", 00:15:46.667 "trtype": "tcp", 00:15:46.667 "traddr": "10.0.0.2", 00:15:46.667 "adrfam": "ipv4", 00:15:46.667 "trsvcid": "4420", 00:15:46.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:46.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:46.667 "hdgst": false, 00:15:46.667 "ddgst": false 00:15:46.667 }, 00:15:46.667 "method": "bdev_nvme_attach_controller" 00:15:46.667 }' 00:15:46.667 [2024-05-15 01:18:22.166268] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:15:46.667 [2024-05-15 01:18:22.166315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4078112 ] 00:15:46.667 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.667 [2024-05-15 01:18:22.236970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.667 [2024-05-15 01:18:22.306135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.925 Running I/O for 10 seconds... 
00:15:56.904 00:15:56.904 Latency(us) 00:15:56.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.904 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:56.904 Verification LBA range: start 0x0 length 0x1000 00:15:56.904 Nvme1n1 : 10.01 8764.39 68.47 0.00 0.00 14563.95 688.13 42152.76 00:15:56.904 =================================================================================================================== 00:15:56.904 Total : 8764.39 68.47 0.00 0.00 14563.95 688.13 42152.76 00:15:57.163 01:18:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=4079968 00:15:57.163 01:18:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:15:57.163 01:18:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:57.163 01:18:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:57.163 01:18:32 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:57.163 01:18:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:57.163 01:18:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:57.163 01:18:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:57.163 01:18:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:57.163 { 00:15:57.163 "params": { 00:15:57.163 "name": "Nvme$subsystem", 00:15:57.163 "trtype": "$TEST_TRANSPORT", 00:15:57.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:57.163 "adrfam": "ipv4", 00:15:57.163 "trsvcid": "$NVMF_PORT", 00:15:57.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:57.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:57.163 "hdgst": ${hdgst:-false}, 00:15:57.163 "ddgst": ${ddgst:-false} 00:15:57.163 }, 00:15:57.163 "method": "bdev_nvme_attach_controller" 00:15:57.163 } 00:15:57.163 EOF 00:15:57.163 )") 00:15:57.163 01:18:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:57.163 [2024-05-15 01:18:32.744326] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.163 [2024-05-15 01:18:32.744360] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.163 01:18:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:15:57.163 01:18:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:57.163 01:18:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:57.163 "params": { 00:15:57.163 "name": "Nvme1", 00:15:57.163 "trtype": "tcp", 00:15:57.163 "traddr": "10.0.0.2", 00:15:57.163 "adrfam": "ipv4", 00:15:57.163 "trsvcid": "4420", 00:15:57.163 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:57.163 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:57.163 "hdgst": false, 00:15:57.163 "ddgst": false 00:15:57.163 }, 00:15:57.163 "method": "bdev_nvme_attach_controller" 00:15:57.163 }' 00:15:57.163 [2024-05-15 01:18:32.756322] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.163 [2024-05-15 01:18:32.756337] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.163 [2024-05-15 01:18:32.768345] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.163 [2024-05-15 01:18:32.768356] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.163 [2024-05-15 01:18:32.779205] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:15:57.163 [2024-05-15 01:18:32.779251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4079968 ] 00:15:57.163 [2024-05-15 01:18:32.780374] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.163 [2024-05-15 01:18:32.780386] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.163 [2024-05-15 01:18:32.792408] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.163 [2024-05-15 01:18:32.792421] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.163 [2024-05-15 01:18:32.804440] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.163 [2024-05-15 01:18:32.804452] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.163 EAL: No free 2048 kB hugepages reported on node 1 00:15:57.163 [2024-05-15 01:18:32.816470] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.163 [2024-05-15 01:18:32.816481] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.164 [2024-05-15 01:18:32.828503] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.164 [2024-05-15 01:18:32.828515] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.164 [2024-05-15 01:18:32.840536] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.164 [2024-05-15 01:18:32.840548] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.164 [2024-05-15 01:18:32.848087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.164 [2024-05-15 01:18:32.852571] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.164 [2024-05-15 01:18:32.852585] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.423 [2024-05-15 01:18:32.864601] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.423 [2024-05-15 01:18:32.864615] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:57.423 [2024-05-15 01:18:32.876631] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.423 [2024-05-15 01:18:32.876643] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.423 [2024-05-15 01:18:32.888671] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.423 [2024-05-15 01:18:32.888693] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.423 [2024-05-15 01:18:32.900698] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.423 [2024-05-15 01:18:32.900710] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.423 [2024-05-15 01:18:32.912730] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.423 [2024-05-15 01:18:32.912743] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.423 [2024-05-15 01:18:32.918625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.423 [2024-05-15 01:18:32.924762] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.423 [2024-05-15 01:18:32.924775] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.423 [2024-05-15 01:18:32.936807] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.423 [2024-05-15 01:18:32.936828] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.423 [2024-05-15 01:18:32.948832] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.423 [2024-05-15 01:18:32.948847] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.423 [2024-05-15 01:18:32.960862] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.423 [2024-05-15 01:18:32.960874] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.423 [2024-05-15 01:18:32.972893] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.423 [2024-05-15 01:18:32.972906] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.423 [2024-05-15 01:18:32.984925] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.423 [2024-05-15 01:18:32.984938] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.423 [2024-05-15 01:18:32.996952] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.423 [2024-05-15 01:18:32.996964] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.424 [2024-05-15 01:18:33.009010] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.424 [2024-05-15 01:18:33.009031] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.424 [2024-05-15 01:18:33.021026] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.424 [2024-05-15 01:18:33.021041] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.424 [2024-05-15 01:18:33.033060] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.424 [2024-05-15 01:18:33.033076] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.424 
[2024-05-15 01:18:33.045091] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.424 [2024-05-15 01:18:33.045102] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.424 [2024-05-15 01:18:33.057122] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.424 [2024-05-15 01:18:33.057133] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.424 [2024-05-15 01:18:33.069157] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.424 [2024-05-15 01:18:33.069168] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.424 [2024-05-15 01:18:33.081194] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.424 [2024-05-15 01:18:33.081209] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.424 [2024-05-15 01:18:33.093226] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.424 [2024-05-15 01:18:33.093239] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.424 [2024-05-15 01:18:33.105256] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.424 [2024-05-15 01:18:33.105267] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.683 [2024-05-15 01:18:33.117302] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.683 [2024-05-15 01:18:33.117315] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.683 [2024-05-15 01:18:33.129337] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.683 [2024-05-15 01:18:33.129353] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.683 [2024-05-15 01:18:33.141378] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.683 [2024-05-15 01:18:33.141390] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.683 [2024-05-15 01:18:33.153412] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.683 [2024-05-15 01:18:33.153424] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.683 [2024-05-15 01:18:33.165446] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.683 [2024-05-15 01:18:33.165459] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.683 [2024-05-15 01:18:33.177481] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.683 [2024-05-15 01:18:33.177497] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.683 [2024-05-15 01:18:33.189515] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.683 [2024-05-15 01:18:33.189528] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.683 [2024-05-15 01:18:33.201547] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.683 [2024-05-15 01:18:33.201559] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.683 [2024-05-15 01:18:33.213583] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.683 [2024-05-15 
01:18:33.213595] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.683 [2024-05-15 01:18:33.225613] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.683 [2024-05-15 01:18:33.225625] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.683 [2024-05-15 01:18:33.237655] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.683 [2024-05-15 01:18:33.237674] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.683 Running I/O for 5 seconds... 00:15:57.683 [2024-05-15 01:18:33.262219] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.683 [2024-05-15 01:18:33.262241] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.683 [2024-05-15 01:18:33.277637] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.683 [2024-05-15 01:18:33.277658] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.683 [2024-05-15 01:18:33.291476] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.683 [2024-05-15 01:18:33.291497] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.683 [2024-05-15 01:18:33.305178] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.683 [2024-05-15 01:18:33.305204] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.683 [2024-05-15 01:18:33.319218] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.683 [2024-05-15 01:18:33.319238] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.683 [2024-05-15 01:18:33.332604] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.683 [2024-05-15 01:18:33.332624] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.683 [2024-05-15 01:18:33.346032] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.683 [2024-05-15 01:18:33.346052] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.683 [2024-05-15 01:18:33.359852] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.683 [2024-05-15 01:18:33.359872] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.683 [2024-05-15 01:18:33.373455] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.683 [2024-05-15 01:18:33.373475] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.954 [2024-05-15 01:18:33.386964] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.954 [2024-05-15 01:18:33.386985] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.954 [2024-05-15 01:18:33.401004] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.954 [2024-05-15 01:18:33.401025] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.954 [2024-05-15 01:18:33.414300] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.954 [2024-05-15 01:18:33.414320] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.954 [2024-05-15 
01:18:33.427514] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.954 [2024-05-15 01:18:33.427535] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.954 [2024-05-15 01:18:33.441230] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.954 [2024-05-15 01:18:33.441252] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.954 [2024-05-15 01:18:33.454458] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.954 [2024-05-15 01:18:33.454479] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.955 [2024-05-15 01:18:33.468297] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.955 [2024-05-15 01:18:33.468318] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.955 [2024-05-15 01:18:33.482111] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.955 [2024-05-15 01:18:33.482132] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.955 [2024-05-15 01:18:33.493082] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.955 [2024-05-15 01:18:33.493102] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.955 [2024-05-15 01:18:33.507077] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.955 [2024-05-15 01:18:33.507098] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.955 [2024-05-15 01:18:33.520478] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.955 [2024-05-15 01:18:33.520499] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.955 [2024-05-15 01:18:33.533917] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.955 [2024-05-15 01:18:33.533938] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.955 [2024-05-15 01:18:33.547632] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.955 [2024-05-15 01:18:33.547652] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.955 [2024-05-15 01:18:33.561209] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.955 [2024-05-15 01:18:33.561229] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.955 [2024-05-15 01:18:33.575082] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.955 [2024-05-15 01:18:33.575103] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.955 [2024-05-15 01:18:33.588364] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.955 [2024-05-15 01:18:33.588384] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.955 [2024-05-15 01:18:33.602077] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.955 [2024-05-15 01:18:33.602098] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.955 [2024-05-15 01:18:33.615622] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.955 [2024-05-15 01:18:33.615643] 
nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:57.955 [2024-05-15 01:18:33.628788] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:57.955 [2024-05-15 01:18:33.628809] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.216 [2024-05-15 01:18:33.642605] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.216 [2024-05-15 01:18:33.642626] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.216 [2024-05-15 01:18:33.656344] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.216 [2024-05-15 01:18:33.656369] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.216 [2024-05-15 01:18:33.669753] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.216 [2024-05-15 01:18:33.669774] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.216 [2024-05-15 01:18:33.683305] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.216 [2024-05-15 01:18:33.683326] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.216 [2024-05-15 01:18:33.696899] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.216 [2024-05-15 01:18:33.696919] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.216 [2024-05-15 01:18:33.710810] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.216 [2024-05-15 01:18:33.710829] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.216 [2024-05-15 01:18:33.731799] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.216 [2024-05-15 01:18:33.731819] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.216 [2024-05-15 01:18:33.747164] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.216 [2024-05-15 01:18:33.747183] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.216 [2024-05-15 01:18:33.760473] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.216 [2024-05-15 01:18:33.760494] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.216 [2024-05-15 01:18:33.775794] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.216 [2024-05-15 01:18:33.775814] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.216 [2024-05-15 01:18:33.790112] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.216 [2024-05-15 01:18:33.790132] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.216 [2024-05-15 01:18:33.804503] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.216 [2024-05-15 01:18:33.804522] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.216 [2024-05-15 01:18:33.823700] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.216 [2024-05-15 01:18:33.823720] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.216 [2024-05-15 01:18:33.838923] 
subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.216 [2024-05-15 01:18:33.838943] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.216 [2024-05-15 01:18:33.855278] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.216 [2024-05-15 01:18:33.855298] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.216 [2024-05-15 01:18:33.870140] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.216 [2024-05-15 01:18:33.870161] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.216 [2024-05-15 01:18:33.884346] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.216 [2024-05-15 01:18:33.884366] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.216 [2024-05-15 01:18:33.899555] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.216 [2024-05-15 01:18:33.899574] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.475 [2024-05-15 01:18:33.914000] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.475 [2024-05-15 01:18:33.914021] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.475 [2024-05-15 01:18:33.927202] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.475 [2024-05-15 01:18:33.927222] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.475 [2024-05-15 01:18:33.941916] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.475 [2024-05-15 01:18:33.941940] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.475 [2024-05-15 01:18:33.956030] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.475 [2024-05-15 01:18:33.956050] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.475 [2024-05-15 01:18:33.970236] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.475 [2024-05-15 01:18:33.970256] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.475 [2024-05-15 01:18:33.982366] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.475 [2024-05-15 01:18:33.982386] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.475 [2024-05-15 01:18:33.995500] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.475 [2024-05-15 01:18:33.995520] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.475 [2024-05-15 01:18:34.009197] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.475 [2024-05-15 01:18:34.009217] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.475 [2024-05-15 01:18:34.022417] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.475 [2024-05-15 01:18:34.022436] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.475 [2024-05-15 01:18:34.040388] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.475 [2024-05-15 01:18:34.040409] 
nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.475 [2024-05-15 01:18:34.054626] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.475 [2024-05-15 01:18:34.054646] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.475 [2024-05-15 01:18:34.068471] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.475 [2024-05-15 01:18:34.068492] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.475 [2024-05-15 01:18:34.082183] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.475 [2024-05-15 01:18:34.082208] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.475 [2024-05-15 01:18:34.096148] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.475 [2024-05-15 01:18:34.096167] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.475 [2024-05-15 01:18:34.111735] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.475 [2024-05-15 01:18:34.111756] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.475 [2024-05-15 01:18:34.125768] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.475 [2024-05-15 01:18:34.125788] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.475 [2024-05-15 01:18:34.136467] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.475 [2024-05-15 01:18:34.136488] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.475 [2024-05-15 01:18:34.150357] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.475 [2024-05-15 01:18:34.150377] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.475 [2024-05-15 01:18:34.165514] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.475 [2024-05-15 01:18:34.165534] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.734 [2024-05-15 01:18:34.180491] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.734 [2024-05-15 01:18:34.180511] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.734 [2024-05-15 01:18:34.194081] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.734 [2024-05-15 01:18:34.194101] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.734 [2024-05-15 01:18:34.207474] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.734 [2024-05-15 01:18:34.207501] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.734 [2024-05-15 01:18:34.221295] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.734 [2024-05-15 01:18:34.221314] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.734 [2024-05-15 01:18:34.236439] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.734 [2024-05-15 01:18:34.236459] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.734 [2024-05-15 01:18:34.250107] 
subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.734 [2024-05-15 01:18:34.250127] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.734 [2024-05-15 01:18:34.263537] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.734 [2024-05-15 01:18:34.263559] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.734 [2024-05-15 01:18:34.276773] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.734 [2024-05-15 01:18:34.276793] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.734 [2024-05-15 01:18:34.290240] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.734 [2024-05-15 01:18:34.290261] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.734 [2024-05-15 01:18:34.303871] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.734 [2024-05-15 01:18:34.303891] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.734 [2024-05-15 01:18:34.317285] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.734 [2024-05-15 01:18:34.317305] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.734 [2024-05-15 01:18:34.330315] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.734 [2024-05-15 01:18:34.330334] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.734 [2024-05-15 01:18:34.343736] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.734 [2024-05-15 01:18:34.343757] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.734 [2024-05-15 01:18:34.357305] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.734 [2024-05-15 01:18:34.357326] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.734 [2024-05-15 01:18:34.370266] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.734 [2024-05-15 01:18:34.370285] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.734 [2024-05-15 01:18:34.383575] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.734 [2024-05-15 01:18:34.383595] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.734 [2024-05-15 01:18:34.396733] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.734 [2024-05-15 01:18:34.396753] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.734 [2024-05-15 01:18:34.410501] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.734 [2024-05-15 01:18:34.410522] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.734 [2024-05-15 01:18:34.424028] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.734 [2024-05-15 01:18:34.424048] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.993 [2024-05-15 01:18:34.437409] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.993 [2024-05-15 01:18:34.437429] 
nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.993 [2024-05-15 01:18:34.450836] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.993 [2024-05-15 01:18:34.450856] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.993 [2024-05-15 01:18:34.464210] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.993 [2024-05-15 01:18:34.464234] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.993 [2024-05-15 01:18:34.477603] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.993 [2024-05-15 01:18:34.477623] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.993 [2024-05-15 01:18:34.490799] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.993 [2024-05-15 01:18:34.490819] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.993 [2024-05-15 01:18:34.504362] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.993 [2024-05-15 01:18:34.504382] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.993 [2024-05-15 01:18:34.518108] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.993 [2024-05-15 01:18:34.518128] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.993 [2024-05-15 01:18:34.531639] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.993 [2024-05-15 01:18:34.531658] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.993 [2024-05-15 01:18:34.545059] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.993 [2024-05-15 01:18:34.545079] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.994 [2024-05-15 01:18:34.558646] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.994 [2024-05-15 01:18:34.558666] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.994 [2024-05-15 01:18:34.572378] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.994 [2024-05-15 01:18:34.572398] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.994 [2024-05-15 01:18:34.585711] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.994 [2024-05-15 01:18:34.585731] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.994 [2024-05-15 01:18:34.599189] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.994 [2024-05-15 01:18:34.599215] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.994 [2024-05-15 01:18:34.612429] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.994 [2024-05-15 01:18:34.612449] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.994 [2024-05-15 01:18:34.626041] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.994 [2024-05-15 01:18:34.626068] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.994 [2024-05-15 01:18:34.639706] 
subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:58.994 [2024-05-15 01:18:34.639726] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
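Each "Requested NSID 1 already in use" / "Unable to add namespace" pair here is the target turning down an nvmf_subsystem_add_ns RPC for a namespace ID that is still attached to nqn.2016-06.io.spdk:cnode1; the test keeps issuing that call while NSID 1 is occupied, so the same two messages repeat for every attempt until the job is stopped. A rough sketch of provoking the identical rejection by hand against a running SPDK target is below; the scripts/rpc.py path, the spare malloc bdev and the loop count are illustrative assumptions, while the subsystem NQN and the -n 1 namespace ID come from this log.

# Sketch only: ask the target to attach a second bdev as NSID 1 while NSID 1 is already in use.
# Assumes a running nvmf target on the default RPC socket; bdev name and sizes are made up.
./scripts/rpc.py bdev_malloc_create -b malloc_dup 32 512
for i in $(seq 1 5); do
    # Each call should fail with the two errors logged above for as long as NSID 1 stays attached.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc_dup -n 1 || true
done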
00:16:02.627 [2024-05-15 01:18:38.230678] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.627 [2024-05-15 01:18:38.230698] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:02.627
00:16:02.627 Latency(us)
00:16:02.627 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:16:02.627 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:02.627 Nvme1n1                     :       5.01   17113.38     133.70       0.00       0.00    7472.22    2149.58   30618.42
00:16:02.627 ===================================================================================================================
00:16:02.627 Total                       :            17113.38     133.70       0.00       0.00    7472.22    2149.58   30618.42
00:16:02.627 [2024-05-15 01:18:38.266315] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.627 [2024-05-15 01:18:38.266334] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.886 [2024-05-15 01:18:38.374598] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.886 [2024-05-15 01:18:38.374609] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
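The throughput column in the summary above is consistent with the reported IOPS and the job's 8192-byte I/O size: 17113.38 I/Os per second at 8192 bytes each is roughly 140.2 MB/s, i.e. the 133.70 MiB/s shown. A one-line sanity check with awk:

# 17113.38 IOPS * 8192 B per I/O, converted to MiB/s; prints 133.70, matching the table.
awk 'BEGIN { printf "%.2f\n", 17113.38 * 8192 / (1024 * 1024) }'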
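In the trace that follows, the test waits for the background job it had been running (PID 4079968), detaches NSID 1, wraps the malloc0 bdev in a delay bdev (the -r/-t/-w/-n latencies are given in microseconds, so roughly one second each) and re-attaches the delayed bdev as NSID 1 before launching the abort example, presumably so that I/O stays outstanding long enough to be aborted. Outside the harness the same sequence would look roughly like the sketch below: rpc_cmd is the test suite's RPC wrapper, scripts/rpc.py on its default socket is assumed here, and the bdev names, latency values, NQN and abort arguments are copied from the trace.

# Rough standalone equivalent of the zcopy.sh steps traced below (run from the SPDK repo root).
./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'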
nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.886 [2024-05-15 01:18:38.386632] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.886 [2024-05-15 01:18:38.386643] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.886 [2024-05-15 01:18:38.398667] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.886 [2024-05-15 01:18:38.398679] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.886 [2024-05-15 01:18:38.410695] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.886 [2024-05-15 01:18:38.410706] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.886 [2024-05-15 01:18:38.422724] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.886 [2024-05-15 01:18:38.422734] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.886 [2024-05-15 01:18:38.434758] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.886 [2024-05-15 01:18:38.434772] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.886 [2024-05-15 01:18:38.446789] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.886 [2024-05-15 01:18:38.446800] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.886 [2024-05-15 01:18:38.458822] subsystem.c:2018:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:02.886 [2024-05-15 01:18:38.458833] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:02.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (4079968) - No such process 00:16:02.886 01:18:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 4079968 00:16:02.886 01:18:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:02.886 01:18:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.886 01:18:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:02.886 01:18:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.886 01:18:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:02.886 01:18:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.886 01:18:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:02.886 delay0 00:16:02.886 01:18:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.886 01:18:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:02.886 01:18:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.886 01:18:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:02.886 01:18:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.886 01:18:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:02.886 EAL: No free 2048 kB hugepages 
reported on node 1 00:16:02.886 [2024-05-15 01:18:38.547485] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:09.486 Initializing NVMe Controllers 00:16:09.486 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:09.486 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:09.486 Initialization complete. Launching workers. 00:16:09.487 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 106 00:16:09.487 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 395, failed to submit 31 00:16:09.487 success 237, unsuccess 158, failed 0 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:09.487 rmmod nvme_tcp 00:16:09.487 rmmod nvme_fabrics 00:16:09.487 rmmod nvme_keyring 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 4078081 ']' 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 4078081 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 4078081 ']' 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 4078081 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4078081 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4078081' 00:16:09.487 killing process with pid 4078081 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 4078081 00:16:09.487 [2024-05-15 01:18:44.899369] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:09.487 01:18:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 4078081 00:16:09.487 01:18:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:09.487 01:18:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:09.487 01:18:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:09.487 01:18:45 nvmf_tcp.nvmf_zcopy 
-- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:09.487 01:18:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:09.487 01:18:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.487 01:18:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:09.487 01:18:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.029 01:18:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:12.029 00:16:12.029 real 0m32.964s 00:16:12.029 user 0m42.317s 00:16:12.029 sys 0m13.326s 00:16:12.029 01:18:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:12.029 01:18:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:12.029 ************************************ 00:16:12.029 END TEST nvmf_zcopy 00:16:12.029 ************************************ 00:16:12.029 01:18:47 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:12.029 01:18:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:12.029 01:18:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:12.029 01:18:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:12.029 ************************************ 00:16:12.029 START TEST nvmf_nmic 00:16:12.029 ************************************ 00:16:12.029 01:18:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:12.029 * Looking for test storage... 00:16:12.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:12.029 01:18:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:12.029 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:12.029 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.029 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.029 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.029 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.029 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.029 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.029 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.029 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.029 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.029 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.029 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:12.029 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:12.029 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.029 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.029 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:12.029 01:18:47 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:12.030 01:18:47 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:18.627 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:18.627 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:18.627 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:18.627 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:18.627 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:18.627 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:18.628 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:18.628 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:af:00.0: cvl_0_0' 00:16:18.628 Found net devices under 0000:af:00.0: cvl_0_0 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:18.628 Found net devices under 0000:af:00.1: cvl_0_1 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:18.628 
01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:18.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:18.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:16:18.628 00:16:18.628 --- 10.0.0.2 ping statistics --- 00:16:18.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.628 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:18.628 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:18.628 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:16:18.628 00:16:18.628 --- 10.0.0.1 ping statistics --- 00:16:18.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:18.628 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=4085572 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 4085572 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 4085572 ']' 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:18.628 01:18:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:18.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:18.629 01:18:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:18.629 01:18:53 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:18.629 01:18:53 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:18.629 [2024-05-15 01:18:53.773896] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
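Before the RPC configuration starts, it is worth noting what nvmftestinit just did with the two ice ports: the whole test runs target and initiator on one box, split by a network namespace. Reassembled from the trace above (a sketch only; cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are this rig's conventions), the wiring is roughly:

    ip netns add cvl_0_0_ns_spdk                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # port 0 becomes the target NIC
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # port 1 stays in the default ns as the initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

nvmf_tgt is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so the target listens on 10.0.0.2 while nvme-cli connects from 10.0.0.1 in the default namespace.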
00:16:18.629 [2024-05-15 01:18:53.773942] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.629 EAL: No free 2048 kB hugepages reported on node 1 00:16:18.629 [2024-05-15 01:18:53.846301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:18.629 [2024-05-15 01:18:53.921001] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.629 [2024-05-15 01:18:53.921039] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.629 [2024-05-15 01:18:53.921048] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.629 [2024-05-15 01:18:53.921057] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.629 [2024-05-15 01:18:53.921064] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:18.629 [2024-05-15 01:18:53.921110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.629 [2024-05-15 01:18:53.921130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.629 [2024-05-15 01:18:53.921218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:18.629 [2024-05-15 01:18:53.921220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.888 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:18.888 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:16:18.888 01:18:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:18.888 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:18.888 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:19.147 01:18:54 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:19.147 01:18:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:19.147 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.147 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:19.147 [2024-05-15 01:18:54.623045] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:19.147 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.147 01:18:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:19.147 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.147 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:19.147 Malloc0 00:16:19.147 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.147 01:18:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:19.147 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.147 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:19.147 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:19.148 [2024-05-15 01:18:54.677379] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:16:19.148 [2024-05-15 01:18:54.677638] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:16:19.148 test case1: single bdev can't be used in multiple subsystems
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:19.148 [2024-05-15 01:18:54.701478] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:16:19.148 [2024-05-15 01:18:54.701497] subsystem.c:2052:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:16:19.148 [2024-05-15 01:18:54.701507] nvmf_rpc.c:1535:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:19.148 request:
00:16:19.148 {
00:16:19.148 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:16:19.148 "namespace": {
00:16:19.148 "bdev_name": "Malloc0",
00:16:19.148 "no_auto_visible": false
00:16:19.148 },
00:16:19.148 "method": "nvmf_subsystem_add_ns",
00:16:19.148 "req_id": 1
00:16:19.148 }
00:16:19.148 Got JSON-RPC error response
00:16:19.148 response:
00:16:19.148 {
00:16:19.148 "code": -32602,
00:16:19.148 "message": "Invalid parameters"
00:16:19.148 }
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:16:19.148 Adding namespace failed - expected result.
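The 'Invalid parameters' response above is the expected outcome of test case 1: Malloc0 is already claimed (exclusive_write) by cnode1, so cnode2 cannot open the same bdev. The rpc_cmd calls in this trace go through SPDK's JSON-RPC interface, so the scenario can be reproduced by hand against a running nvmf_tgt with scripts/rpc.py (a minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0    # fails: bdev already claimed by cnode1

Only the last call is expected to fail; removing the namespace from cnode1 first would release the claim and let cnode2 add the bdev instead.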
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:16:19.148 test case2: host connect to nvmf target in multiple paths
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:19.148 [2024-05-15 01:18:54.717649] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:19.148 01:18:54 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:20.526 01:18:56 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:16:21.902 01:18:57 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:16:21.902 01:18:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0
00:16:21.902 01:18:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0
00:16:21.902 01:18:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]]
00:16:21.902 01:18:57 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2
00:16:23.807 01:18:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 ))
00:16:23.807 01:18:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL
00:16:23.807 01:18:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME
00:16:23.807 01:18:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1
00:16:23.807 01:18:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter ))
00:16:23.807 01:18:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0
00:16:23.807 01:18:59 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:16:23.807 [global]
00:16:23.807 thread=1
00:16:23.807 invalidate=1
00:16:23.807 rw=write
00:16:23.807 time_based=1
00:16:23.807 runtime=1
00:16:23.807 ioengine=libaio
00:16:23.807 direct=1
00:16:23.807 bs=4096
00:16:23.807 iodepth=1
00:16:23.807 norandommap=0
00:16:23.807 numjobs=1
00:16:23.807
00:16:23.807 verify_dump=1
00:16:23.807 verify_backlog=512
00:16:23.807 verify_state_save=0
00:16:23.807 do_verify=1
00:16:23.807 verify=crc32c-intel
00:16:23.807 [job0]
00:16:23.807 filename=/dev/nvme0n1
00:16:24.067 Could not set queue depth (nvme0n1)
00:16:24.326 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:16:24.326 fio-3.35
00:16:24.326 Starting 1 thread
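For anyone rerunning this step by hand, the job file that fio-wrapper generated above corresponds, option for option, to a plain fio command line, roughly as follows (a sketch, assuming the connected namespace still appears as /dev/nvme0n1):

    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based=1 --runtime=1 \
        --invalidate=1 --norandommap=0 --do_verify=1 --verify=crc32c-intel \
        --verify_backlog=512 --verify_dump=1 --verify_state_save=0

The fio-wrapper arguments -p nvmf -i 4096 -d 1 -t write -r 1 -v correspond to the 4 KiB block size, queue depth 1, one-second time-based write workload and CRC32C data verification reported in the results below.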
00:16:25.263
00:16:25.263 job0: (groupid=0, jobs=1): err= 0: pid=4086764: Wed May 15 01:19:00 2024
00:16:25.263 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec)
00:16:25.263 slat (nsec): min=8825, max=45690, avg=9930.66, stdev=2280.03
00:16:25.263 clat (usec): min=337, max=1133, avg=575.93, stdev=115.98
00:16:25.263 lat (usec): min=347, max=1162, avg=585.86, stdev=116.37
00:16:25.263 clat percentiles (usec):
00:16:25.263 | 1.00th=[ 347], 5.00th=[ 363], 10.00th=[ 416], 20.00th=[ 502],
00:16:25.263 | 30.00th=[ 545], 40.00th=[ 562], 50.00th=[ 570], 60.00th=[ 578],
00:16:25.263 | 70.00th=[ 594], 80.00th=[ 652], 90.00th=[ 758], 95.00th=[ 799],
00:16:25.263 | 99.00th=[ 824], 99.50th=[ 865], 99.90th=[ 1037], 99.95th=[ 1139],
00:16:25.263 | 99.99th=[ 1139]
00:16:25.263 write: IOPS=1373, BW=5495KiB/s (5626kB/s)(5500KiB/1001msec); 0 zone resets
00:16:25.263 slat (usec): min=11, max=27663, avg=33.34, stdev=745.67
00:16:25.263 clat (usec): min=186, max=617, avg=252.74, stdev=62.41
00:16:25.263 lat (usec): min=198, max=28273, avg=286.07, stdev=757.84
00:16:25.263 clat percentiles (usec):
00:16:25.263 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 202], 20.00th=[ 215],
00:16:25.263 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 231], 60.00th=[ 239],
00:16:25.263 | 70.00th=[ 255], 80.00th=[ 281], 90.00th=[ 330], 95.00th=[ 408],
00:16:25.263 | 99.00th=[ 469], 99.50th=[ 474], 99.90th=[ 611], 99.95th=[ 619],
00:16:25.263 | 99.99th=[ 619]
00:16:25.263 bw ( KiB/s): min= 5024, max= 5024, per=91.44%, avg=5024.00, stdev= 0.00, samples=1
00:16:25.263 iops : min= 1256, max= 1256, avg=1256.00, stdev= 0.00, samples=1
00:16:25.263 lat (usec) : 250=39.35%, 500=26.34%, 750=29.72%, 1000=4.50%
00:16:25.263 lat (msec) : 2=0.08%
00:16:25.263 cpu : usr=2.40%, sys=3.30%, ctx=2404, majf=0, minf=2
00:16:25.263 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:25.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:25.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:25.263 issued rwts: total=1024,1375,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:25.263 latency : target=0, window=0, percentile=100.00%, depth=1
00:16:25.263
00:16:25.263 Run status group 0 (all jobs):
00:16:25.263 READ: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec
00:16:25.263 WRITE: bw=5495KiB/s (5626kB/s), 5495KiB/s-5495KiB/s (5626kB/s-5626kB/s), io=5500KiB (5632kB), run=1001-1001msec
00:16:25.263
00:16:25.263 Disk stats (read/write):
00:16:25.263 nvme0n1: ios=1024/1024, merge=0/0, ticks=1525/255, in_queue=1780, util=98.90%
00:16:25.263 01:19:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:25.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:16:25.521 01:19:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:25.521 01:19:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0
00:16:25.522 01:19:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL
00:16:25.522 01:19:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:25.522 01:19:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL
00:16:25.522 01:19:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:25.522 01:19:01
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:16:25.522 01:19:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:25.522 01:19:01 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:25.522 01:19:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:25.522 01:19:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:25.522 01:19:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:25.522 01:19:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:25.522 01:19:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:25.522 01:19:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:25.780 rmmod nvme_tcp 00:16:25.780 rmmod nvme_fabrics 00:16:25.780 rmmod nvme_keyring 00:16:25.780 01:19:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:25.780 01:19:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:25.780 01:19:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:25.780 01:19:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 4085572 ']' 00:16:25.780 01:19:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 4085572 00:16:25.780 01:19:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 4085572 ']' 00:16:25.780 01:19:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 4085572 00:16:25.780 01:19:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:16:25.780 01:19:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:25.780 01:19:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4085572 00:16:25.780 01:19:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:25.780 01:19:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:25.780 01:19:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4085572' 00:16:25.780 killing process with pid 4085572 00:16:25.780 01:19:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 4085572 00:16:25.780 [2024-05-15 01:19:01.316717] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:25.780 01:19:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 4085572 00:16:26.040 01:19:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:26.040 01:19:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:26.040 01:19:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:26.040 01:19:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:26.040 01:19:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:26.040 01:19:01 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.040 01:19:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.040 01:19:01 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.982 01:19:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:27.982 00:16:27.982 real 0m16.344s 00:16:27.982 user 0m40.359s 00:16:27.982 sys 0m5.975s 00:16:27.982 01:19:03 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:16:27.982 01:19:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:27.982 ************************************ 00:16:27.982 END TEST nvmf_nmic 00:16:27.982 ************************************ 00:16:28.262 01:19:03 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:28.262 01:19:03 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:28.262 01:19:03 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:28.262 01:19:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:28.262 ************************************ 00:16:28.262 START TEST nvmf_fio_target 00:16:28.262 ************************************ 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:28.262 * Looking for test storage... 00:16:28.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.262 01:19:03 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target 
-- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:28.263 01:19:03 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:34.830 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.830 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:34.831 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.831 
01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:34.831 Found net devices under 0000:af:00.0: cvl_0_0 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:34.831 Found net devices under 0000:af:00.1: cvl_0_1 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:34.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:16:34.831 00:16:34.831 --- 10.0.0.2 ping statistics --- 00:16:34.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.831 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:34.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:16:34.831 00:16:34.831 --- 10.0.0.1 ping statistics --- 00:16:34.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.831 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:34.831 01:19:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.090 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=4090728 00:16:35.090 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:35.090 01:19:10 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 4090728 00:16:35.090 01:19:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 4090728 ']' 00:16:35.091 01:19:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.091 01:19:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:35.091 01:19:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
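For reference, the network preparation traced above boils down to the following shell sketch. Every command is taken from this run's trace; the port names cvl_0_0/cvl_0_1 and the namespace name cvl_0_0_ns_spdk are specific to this machine, and the consolidated form is an approximation of what nvmf/common.sh does here, not a copy of it.

# The target-side port is moved into its own network namespace; the
# initiator-side port stays in the default namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# 10.0.0.1 is the initiator address, 10.0.0.2 the target address.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# nvmfappstart then launches the target inside that namespace (pid 4090728 in this run).
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &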
00:16:35.091 01:19:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:35.091 01:19:10 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.091 [2024-05-15 01:19:10.570234] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:16:35.091 [2024-05-15 01:19:10.570281] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.091 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.091 [2024-05-15 01:19:10.644097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:35.091 [2024-05-15 01:19:10.719893] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.091 [2024-05-15 01:19:10.719929] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.091 [2024-05-15 01:19:10.719938] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.091 [2024-05-15 01:19:10.719947] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.091 [2024-05-15 01:19:10.719955] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.091 [2024-05-15 01:19:10.720004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.091 [2024-05-15 01:19:10.720020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.091 [2024-05-15 01:19:10.720106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:35.091 [2024-05-15 01:19:10.720108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.027 01:19:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:36.027 01:19:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:16:36.027 01:19:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:36.027 01:19:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.027 01:19:11 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.027 01:19:11 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.027 01:19:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:36.027 [2024-05-15 01:19:11.585622] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.027 01:19:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:36.286 01:19:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:36.286 01:19:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:36.545 01:19:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:36.545 01:19:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:36.545 01:19:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:36.545 01:19:12 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:36.802 01:19:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:36.802 01:19:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:37.060 01:19:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:37.319 01:19:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:37.319 01:19:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:37.319 01:19:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:37.319 01:19:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:37.577 01:19:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:37.577 01:19:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:37.836 01:19:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:38.095 01:19:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:38.095 01:19:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:38.095 01:19:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:38.095 01:19:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:38.355 01:19:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:38.355 [2024-05-15 01:19:14.041740] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:38.355 [2024-05-15 01:19:14.042037] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.614 01:19:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:38.614 01:19:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:38.873 01:19:14 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:40.250 01:19:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:16:40.250 01:19:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:16:40.250 01:19:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:40.250 01:19:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:16:40.250 01:19:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:16:40.250 01:19:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:16:42.154 01:19:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:42.154 01:19:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:42.154 01:19:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:42.154 01:19:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:16:42.154 01:19:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:42.154 01:19:17 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:16:42.154 01:19:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:42.154 [global] 00:16:42.154 thread=1 00:16:42.154 invalidate=1 00:16:42.154 rw=write 00:16:42.154 time_based=1 00:16:42.154 runtime=1 00:16:42.154 ioengine=libaio 00:16:42.154 direct=1 00:16:42.154 bs=4096 00:16:42.154 iodepth=1 00:16:42.154 norandommap=0 00:16:42.154 numjobs=1 00:16:42.154 00:16:42.154 verify_dump=1 00:16:42.154 verify_backlog=512 00:16:42.154 verify_state_save=0 00:16:42.154 do_verify=1 00:16:42.154 verify=crc32c-intel 00:16:42.154 [job0] 00:16:42.154 filename=/dev/nvme0n1 00:16:42.154 [job1] 00:16:42.154 filename=/dev/nvme0n2 00:16:42.154 [job2] 00:16:42.154 filename=/dev/nvme0n3 00:16:42.154 [job3] 00:16:42.154 filename=/dev/nvme0n4 00:16:42.436 Could not set queue depth (nvme0n1) 00:16:42.436 Could not set queue depth (nvme0n2) 00:16:42.436 Could not set queue depth (nvme0n3) 00:16:42.436 Could not set queue depth (nvme0n4) 00:16:42.701 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:42.701 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:42.701 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:42.701 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:42.701 fio-3.35 00:16:42.701 Starting 4 threads 00:16:44.107 00:16:44.107 job0: (groupid=0, jobs=1): err= 0: pid=4092276: Wed May 15 01:19:19 2024 00:16:44.107 read: IOPS=133, BW=534KiB/s (547kB/s)(536KiB/1004msec) 00:16:44.107 slat (nsec): min=4857, max=25840, avg=8624.35, stdev=6554.16 00:16:44.107 clat (usec): min=428, max=42982, avg=6446.51, stdev=14478.77 00:16:44.107 lat (usec): min=434, max=43007, avg=6455.14, stdev=14484.83 00:16:44.107 clat percentiles (usec): 00:16:44.107 | 1.00th=[ 461], 5.00th=[ 486], 10.00th=[ 494], 20.00th=[ 510], 00:16:44.107 | 30.00th=[ 523], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 553], 00:16:44.107 | 70.00th=[ 701], 80.00th=[ 840], 90.00th=[41681], 95.00th=[42206], 00:16:44.107 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:44.107 | 
99.99th=[42730] 00:16:44.107 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:16:44.107 slat (nsec): min=11299, max=61462, avg=12287.11, stdev=2335.99 00:16:44.107 clat (usec): min=201, max=666, avg=256.29, stdev=66.72 00:16:44.107 lat (usec): min=213, max=727, avg=268.58, stdev=67.40 00:16:44.107 clat percentiles (usec): 00:16:44.107 | 1.00th=[ 206], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:16:44.107 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 237], 00:16:44.107 | 70.00th=[ 247], 80.00th=[ 269], 90.00th=[ 367], 95.00th=[ 457], 00:16:44.107 | 99.00th=[ 465], 99.50th=[ 469], 99.90th=[ 668], 99.95th=[ 668], 00:16:44.107 | 99.99th=[ 668] 00:16:44.107 bw ( KiB/s): min= 4096, max= 4096, per=24.94%, avg=4096.00, stdev= 0.00, samples=1 00:16:44.107 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:44.107 lat (usec) : 250=57.12%, 500=24.92%, 750=12.54%, 1000=2.32% 00:16:44.107 lat (msec) : 2=0.15%, 50=2.94% 00:16:44.107 cpu : usr=0.50%, sys=0.60%, ctx=646, majf=0, minf=1 00:16:44.107 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:44.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.107 issued rwts: total=134,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.107 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:44.107 job1: (groupid=0, jobs=1): err= 0: pid=4092277: Wed May 15 01:19:19 2024 00:16:44.107 read: IOPS=1044, BW=4178KiB/s (4278kB/s)(4316KiB/1033msec) 00:16:44.107 slat (nsec): min=4504, max=60126, avg=10747.87, stdev=1993.99 00:16:44.107 clat (usec): min=383, max=41568, avg=553.02, stdev=1251.06 00:16:44.107 lat (usec): min=394, max=41577, avg=563.77, stdev=1250.99 00:16:44.107 clat percentiles (usec): 00:16:44.107 | 1.00th=[ 404], 5.00th=[ 465], 10.00th=[ 486], 20.00th=[ 494], 00:16:44.107 | 30.00th=[ 502], 40.00th=[ 506], 50.00th=[ 510], 60.00th=[ 515], 00:16:44.107 | 70.00th=[ 523], 80.00th=[ 529], 90.00th=[ 545], 95.00th=[ 570], 00:16:44.107 | 99.00th=[ 799], 99.50th=[ 807], 99.90th=[ 1565], 99.95th=[41681], 00:16:44.107 | 99.99th=[41681] 00:16:44.107 write: IOPS=1486, BW=5948KiB/s (6090kB/s)(6144KiB/1033msec); 0 zone resets 00:16:44.107 slat (nsec): min=4585, max=73539, avg=14618.82, stdev=2960.74 00:16:44.107 clat (usec): min=181, max=3948, avg=255.53, stdev=108.24 00:16:44.107 lat (usec): min=210, max=3962, avg=270.15, stdev=108.22 00:16:44.107 clat percentiles (usec): 00:16:44.107 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 221], 00:16:44.107 | 30.00th=[ 225], 40.00th=[ 231], 50.00th=[ 239], 60.00th=[ 247], 00:16:44.107 | 70.00th=[ 265], 80.00th=[ 277], 90.00th=[ 297], 95.00th=[ 330], 00:16:44.107 | 99.00th=[ 465], 99.50th=[ 469], 99.90th=[ 1090], 99.95th=[ 3949], 00:16:44.107 | 99.99th=[ 3949] 00:16:44.107 bw ( KiB/s): min= 5632, max= 6656, per=37.41%, avg=6144.00, stdev=724.08, samples=2 00:16:44.107 iops : min= 1408, max= 1664, avg=1536.00, stdev=181.02, samples=2 00:16:44.107 lat (usec) : 250=36.21%, 500=34.72%, 750=28.49%, 1000=0.38% 00:16:44.107 lat (msec) : 2=0.11%, 4=0.04%, 50=0.04% 00:16:44.107 cpu : usr=1.94%, sys=5.62%, ctx=2616, majf=0, minf=1 00:16:44.107 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:44.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.107 issued 
rwts: total=1079,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.107 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:44.107 job2: (groupid=0, jobs=1): err= 0: pid=4092278: Wed May 15 01:19:19 2024 00:16:44.107 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:44.107 slat (nsec): min=4120, max=27787, avg=9065.42, stdev=2159.92 00:16:44.107 clat (usec): min=393, max=2344, avg=599.24, stdev=116.13 00:16:44.107 lat (usec): min=397, max=2353, avg=608.31, stdev=115.86 00:16:44.107 clat percentiles (usec): 00:16:44.107 | 1.00th=[ 461], 5.00th=[ 519], 10.00th=[ 529], 20.00th=[ 537], 00:16:44.107 | 30.00th=[ 545], 40.00th=[ 553], 50.00th=[ 562], 60.00th=[ 570], 00:16:44.107 | 70.00th=[ 594], 80.00th=[ 644], 90.00th=[ 742], 95.00th=[ 799], 00:16:44.107 | 99.00th=[ 1045], 99.50th=[ 1074], 99.90th=[ 1582], 99.95th=[ 2343], 00:16:44.107 | 99.99th=[ 2343] 00:16:44.107 write: IOPS=1167, BW=4671KiB/s (4783kB/s)(4676KiB/1001msec); 0 zone resets 00:16:44.107 slat (nsec): min=5435, max=39913, avg=12261.63, stdev=2661.96 00:16:44.107 clat (usec): min=208, max=695, avg=305.23, stdev=76.74 00:16:44.107 lat (usec): min=220, max=735, avg=317.49, stdev=76.12 00:16:44.107 clat percentiles (usec): 00:16:44.107 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 243], 00:16:44.107 | 30.00th=[ 265], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:16:44.107 | 70.00th=[ 306], 80.00th=[ 347], 90.00th=[ 453], 95.00th=[ 482], 00:16:44.107 | 99.00th=[ 519], 99.50th=[ 586], 99.90th=[ 619], 99.95th=[ 693], 00:16:44.107 | 99.99th=[ 693] 00:16:44.107 bw ( KiB/s): min= 5360, max= 5360, per=32.64%, avg=5360.00, stdev= 0.00, samples=1 00:16:44.107 iops : min= 1340, max= 1340, avg=1340.00, stdev= 0.00, samples=1 00:16:44.107 lat (usec) : 250=13.04%, 500=40.08%, 750=42.64%, 1000=3.60% 00:16:44.107 lat (msec) : 2=0.59%, 4=0.05% 00:16:44.107 cpu : usr=1.90%, sys=3.60%, ctx=2193, majf=0, minf=2 00:16:44.107 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:44.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.107 issued rwts: total=1024,1169,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.107 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:44.107 job3: (groupid=0, jobs=1): err= 0: pid=4092279: Wed May 15 01:19:19 2024 00:16:44.107 read: IOPS=917, BW=3668KiB/s (3756kB/s)(3672KiB/1001msec) 00:16:44.107 slat (nsec): min=3960, max=28029, avg=7087.27, stdev=2525.81 00:16:44.107 clat (usec): min=278, max=42770, avg=709.65, stdev=2122.04 00:16:44.107 lat (usec): min=289, max=42776, avg=716.74, stdev=2122.04 00:16:44.107 clat percentiles (usec): 00:16:44.107 | 1.00th=[ 297], 5.00th=[ 355], 10.00th=[ 424], 20.00th=[ 469], 00:16:44.107 | 30.00th=[ 519], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 603], 00:16:44.107 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 799], 95.00th=[ 881], 00:16:44.107 | 99.00th=[ 1057], 99.50th=[ 1156], 99.90th=[42730], 99.95th=[42730], 00:16:44.107 | 99.99th=[42730] 00:16:44.107 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:16:44.107 slat (nsec): min=5239, max=48312, avg=11183.93, stdev=4228.92 00:16:44.107 clat (usec): min=183, max=724, avg=318.54, stdev=104.89 00:16:44.107 lat (usec): min=195, max=745, avg=329.72, stdev=103.86 00:16:44.107 clat percentiles (usec): 00:16:44.107 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 215], 00:16:44.107 | 30.00th=[ 239], 40.00th=[ 269], 
50.00th=[ 285], 60.00th=[ 334], 00:16:44.107 | 70.00th=[ 379], 80.00th=[ 429], 90.00th=[ 474], 95.00th=[ 502], 00:16:44.107 | 99.00th=[ 562], 99.50th=[ 578], 99.90th=[ 676], 99.95th=[ 725], 00:16:44.107 | 99.99th=[ 725] 00:16:44.107 bw ( KiB/s): min= 4096, max= 4096, per=24.94%, avg=4096.00, stdev= 0.00, samples=1 00:16:44.107 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:44.107 lat (usec) : 250=17.40%, 500=44.70%, 750=31.62%, 1000=5.30% 00:16:44.107 lat (msec) : 2=0.82%, 50=0.15% 00:16:44.107 cpu : usr=1.20%, sys=2.00%, ctx=1946, majf=0, minf=1 00:16:44.107 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:44.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:44.107 issued rwts: total=918,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:44.107 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:44.107 00:16:44.107 Run status group 0 (all jobs): 00:16:44.107 READ: bw=11.9MiB/s (12.5MB/s), 534KiB/s-4178KiB/s (547kB/s-4278kB/s), io=12.3MiB (12.9MB), run=1001-1033msec 00:16:44.107 WRITE: bw=16.0MiB/s (16.8MB/s), 2040KiB/s-5948KiB/s (2089kB/s-6090kB/s), io=16.6MiB (17.4MB), run=1001-1033msec 00:16:44.107 00:16:44.107 Disk stats (read/write): 00:16:44.107 nvme0n1: ios=179/512, merge=0/0, ticks=800/127, in_queue=927, util=94.39% 00:16:44.107 nvme0n2: ios=1044/1038, merge=0/0, ticks=547/260, in_queue=807, util=85.35% 00:16:44.107 nvme0n3: ios=895/1024, merge=0/0, ticks=604/302, in_queue=906, util=94.56% 00:16:44.107 nvme0n4: ios=692/1024, merge=0/0, ticks=1420/321, in_queue=1741, util=100.00% 00:16:44.107 01:19:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:44.107 [global] 00:16:44.107 thread=1 00:16:44.107 invalidate=1 00:16:44.107 rw=randwrite 00:16:44.107 time_based=1 00:16:44.107 runtime=1 00:16:44.107 ioengine=libaio 00:16:44.107 direct=1 00:16:44.107 bs=4096 00:16:44.107 iodepth=1 00:16:44.107 norandommap=0 00:16:44.107 numjobs=1 00:16:44.107 00:16:44.107 verify_dump=1 00:16:44.107 verify_backlog=512 00:16:44.107 verify_state_save=0 00:16:44.107 do_verify=1 00:16:44.107 verify=crc32c-intel 00:16:44.107 [job0] 00:16:44.107 filename=/dev/nvme0n1 00:16:44.107 [job1] 00:16:44.107 filename=/dev/nvme0n2 00:16:44.107 [job2] 00:16:44.107 filename=/dev/nvme0n3 00:16:44.107 [job3] 00:16:44.107 filename=/dev/nvme0n4 00:16:44.107 Could not set queue depth (nvme0n1) 00:16:44.107 Could not set queue depth (nvme0n2) 00:16:44.107 Could not set queue depth (nvme0n3) 00:16:44.107 Could not set queue depth (nvme0n4) 00:16:44.365 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.365 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.365 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.366 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:44.366 fio-3.35 00:16:44.366 Starting 4 threads 00:16:45.782 00:16:45.782 job0: (groupid=0, jobs=1): err= 0: pid=4092702: Wed May 15 01:19:21 2024 00:16:45.782 read: IOPS=355, BW=1422KiB/s (1456kB/s)(1472KiB/1035msec) 00:16:45.782 slat (nsec): min=8890, max=26496, avg=10226.86, stdev=3232.14 00:16:45.783 clat (usec): 
min=411, max=43241, avg=2379.56, stdev=8644.61 00:16:45.783 lat (usec): min=421, max=43265, avg=2389.79, stdev=8647.22 00:16:45.783 clat percentiles (usec): 00:16:45.783 | 1.00th=[ 416], 5.00th=[ 429], 10.00th=[ 437], 20.00th=[ 445], 00:16:45.783 | 30.00th=[ 457], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 482], 00:16:45.783 | 70.00th=[ 486], 80.00th=[ 498], 90.00th=[ 611], 95.00th=[ 676], 00:16:45.783 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:16:45.783 | 99.99th=[43254] 00:16:45.783 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:16:45.783 slat (nsec): min=11718, max=82109, avg=13334.60, stdev=3871.80 00:16:45.783 clat (usec): min=186, max=1132, avg=284.69, stdev=113.54 00:16:45.783 lat (usec): min=199, max=1214, avg=298.02, stdev=114.82 00:16:45.783 clat percentiles (usec): 00:16:45.783 | 1.00th=[ 200], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:16:45.783 | 30.00th=[ 235], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 253], 00:16:45.783 | 70.00th=[ 281], 80.00th=[ 306], 90.00th=[ 367], 95.00th=[ 486], 00:16:45.783 | 99.00th=[ 783], 99.50th=[ 783], 99.90th=[ 1139], 99.95th=[ 1139], 00:16:45.783 | 99.99th=[ 1139] 00:16:45.783 bw ( KiB/s): min= 4096, max= 4096, per=32.88%, avg=4096.00, stdev= 0.00, samples=1 00:16:45.783 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:45.783 lat (usec) : 250=32.73%, 500=57.05%, 750=6.25%, 1000=1.93% 00:16:45.783 lat (msec) : 2=0.11%, 50=1.93% 00:16:45.783 cpu : usr=1.35%, sys=0.97%, ctx=881, majf=0, minf=1 00:16:45.783 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:45.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.783 issued rwts: total=368,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.783 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:45.783 job1: (groupid=0, jobs=1): err= 0: pid=4092704: Wed May 15 01:19:21 2024 00:16:45.783 read: IOPS=1018, BW=4076KiB/s (4174kB/s)(4080KiB/1001msec) 00:16:45.783 slat (nsec): min=8548, max=47278, avg=9455.50, stdev=2042.91 00:16:45.783 clat (usec): min=327, max=41188, avg=648.52, stdev=2535.14 00:16:45.783 lat (usec): min=336, max=41211, avg=657.97, stdev=2535.63 00:16:45.783 clat percentiles (usec): 00:16:45.783 | 1.00th=[ 347], 5.00th=[ 396], 10.00th=[ 441], 20.00th=[ 474], 00:16:45.783 | 30.00th=[ 486], 40.00th=[ 490], 50.00th=[ 494], 60.00th=[ 498], 00:16:45.783 | 70.00th=[ 502], 80.00th=[ 506], 90.00th=[ 519], 95.00th=[ 578], 00:16:45.783 | 99.00th=[ 668], 99.50th=[ 709], 99.90th=[41157], 99.95th=[41157], 00:16:45.783 | 99.99th=[41157] 00:16:45.783 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:16:45.783 slat (nsec): min=11705, max=40590, avg=13118.99, stdev=2266.32 00:16:45.783 clat (usec): min=193, max=1918, avg=302.26, stdev=119.26 00:16:45.783 lat (usec): min=205, max=1934, avg=315.37, stdev=119.71 00:16:45.783 clat percentiles (usec): 00:16:45.783 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 231], 00:16:45.783 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 265], 60.00th=[ 297], 00:16:45.783 | 70.00th=[ 322], 80.00th=[ 371], 90.00th=[ 404], 95.00th=[ 469], 00:16:45.783 | 99.00th=[ 775], 99.50th=[ 791], 99.90th=[ 1680], 99.95th=[ 1926], 00:16:45.783 | 99.99th=[ 1926] 00:16:45.783 bw ( KiB/s): min= 5544, max= 5544, per=44.51%, avg=5544.00, stdev= 0.00, samples=1 00:16:45.783 iops : min= 1386, max= 1386, avg=1386.00, 
stdev= 0.00, samples=1 00:16:45.783 lat (usec) : 250=22.46%, 500=58.95%, 750=17.42%, 1000=0.88% 00:16:45.783 lat (msec) : 2=0.10%, 50=0.20% 00:16:45.783 cpu : usr=2.50%, sys=3.00%, ctx=2044, majf=0, minf=1 00:16:45.783 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:45.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.783 issued rwts: total=1020,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.783 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:45.783 job2: (groupid=0, jobs=1): err= 0: pid=4092707: Wed May 15 01:19:21 2024 00:16:45.783 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:45.783 slat (nsec): min=8864, max=61721, avg=14073.84, stdev=7702.63 00:16:45.783 clat (usec): min=422, max=1025, avg=619.70, stdev=103.19 00:16:45.783 lat (usec): min=431, max=1071, avg=633.77, stdev=108.99 00:16:45.783 clat percentiles (usec): 00:16:45.783 | 1.00th=[ 441], 5.00th=[ 494], 10.00th=[ 515], 20.00th=[ 545], 00:16:45.783 | 30.00th=[ 562], 40.00th=[ 570], 50.00th=[ 578], 60.00th=[ 594], 00:16:45.783 | 70.00th=[ 660], 80.00th=[ 750], 90.00th=[ 783], 95.00th=[ 799], 00:16:45.783 | 99.00th=[ 840], 99.50th=[ 873], 99.90th=[ 996], 99.95th=[ 1029], 00:16:45.783 | 99.99th=[ 1029] 00:16:45.783 write: IOPS=1173, BW=4695KiB/s (4808kB/s)(4700KiB/1001msec); 0 zone resets 00:16:45.783 slat (nsec): min=11596, max=65987, avg=12639.74, stdev=2630.60 00:16:45.783 clat (usec): min=198, max=2154, avg=280.89, stdev=101.93 00:16:45.783 lat (usec): min=217, max=2167, avg=293.53, stdev=102.70 00:16:45.783 clat percentiles (usec): 00:16:45.783 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 223], 00:16:45.783 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 253], 60.00th=[ 269], 00:16:45.783 | 70.00th=[ 289], 80.00th=[ 310], 90.00th=[ 379], 95.00th=[ 457], 00:16:45.783 | 99.00th=[ 594], 99.50th=[ 676], 99.90th=[ 1663], 99.95th=[ 2147], 00:16:45.783 | 99.99th=[ 2147] 00:16:45.783 bw ( KiB/s): min= 5392, max= 5392, per=43.29%, avg=5392.00, stdev= 0.00, samples=1 00:16:45.783 iops : min= 1348, max= 1348, avg=1348.00, stdev= 0.00, samples=1 00:16:45.783 lat (usec) : 250=25.10%, 500=30.88%, 750=34.56%, 1000=9.32% 00:16:45.783 lat (msec) : 2=0.09%, 4=0.05% 00:16:45.783 cpu : usr=1.40%, sys=3.40%, ctx=2199, majf=0, minf=1 00:16:45.783 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:45.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.783 issued rwts: total=1024,1175,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.783 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:45.783 job3: (groupid=0, jobs=1): err= 0: pid=4092708: Wed May 15 01:19:21 2024 00:16:45.783 read: IOPS=20, BW=81.3KiB/s (83.3kB/s)(84.0KiB/1033msec) 00:16:45.783 slat (nsec): min=10197, max=30417, avg=16405.14, stdev=6231.59 00:16:45.783 clat (usec): min=40950, max=42091, avg=41726.58, stdev=413.11 00:16:45.783 lat (usec): min=40962, max=42115, avg=41742.98, stdev=415.39 00:16:45.783 clat percentiles (usec): 00:16:45.783 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:45.783 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:16:45.783 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:45.783 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 
99.95th=[42206], 00:16:45.783 | 99.99th=[42206] 00:16:45.783 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:16:45.783 slat (nsec): min=12102, max=40594, avg=13606.02, stdev=2095.25 00:16:45.783 clat (usec): min=212, max=674, avg=288.84, stdev=62.60 00:16:45.783 lat (usec): min=225, max=715, avg=302.45, stdev=62.99 00:16:45.783 clat percentiles (usec): 00:16:45.783 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 233], 20.00th=[ 241], 00:16:45.783 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:16:45.783 | 70.00th=[ 293], 80.00th=[ 322], 90.00th=[ 375], 95.00th=[ 457], 00:16:45.783 | 99.00th=[ 469], 99.50th=[ 474], 99.90th=[ 676], 99.95th=[ 676], 00:16:45.783 | 99.99th=[ 676] 00:16:45.783 bw ( KiB/s): min= 4096, max= 4096, per=32.88%, avg=4096.00, stdev= 0.00, samples=1 00:16:45.783 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:45.783 lat (usec) : 250=26.64%, 500=69.23%, 750=0.19% 00:16:45.783 lat (msec) : 50=3.94% 00:16:45.783 cpu : usr=0.39%, sys=0.58%, ctx=536, majf=0, minf=2 00:16:45.783 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:45.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.783 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.783 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:45.783 00:16:45.783 Run status group 0 (all jobs): 00:16:45.783 READ: bw=9403KiB/s (9629kB/s), 81.3KiB/s-4092KiB/s (83.3kB/s-4190kB/s), io=9732KiB (9966kB), run=1001-1035msec 00:16:45.783 WRITE: bw=12.2MiB/s (12.8MB/s), 1979KiB/s-4695KiB/s (2026kB/s-4808kB/s), io=12.6MiB (13.2MB), run=1001-1035msec 00:16:45.783 00:16:45.783 Disk stats (read/write): 00:16:45.783 nvme0n1: ios=65/512, merge=0/0, ticks=752/146, in_queue=898, util=92.18% 00:16:45.783 nvme0n2: ios=977/1024, merge=0/0, ticks=481/290, in_queue=771, util=83.01% 00:16:45.783 nvme0n3: ios=894/1024, merge=0/0, ticks=618/262, in_queue=880, util=92.43% 00:16:45.783 nvme0n4: ios=37/512, merge=0/0, ticks=1538/144, in_queue=1682, util=98.01% 00:16:45.783 01:19:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:45.783 [global] 00:16:45.783 thread=1 00:16:45.783 invalidate=1 00:16:45.783 rw=write 00:16:45.783 time_based=1 00:16:45.783 runtime=1 00:16:45.783 ioengine=libaio 00:16:45.783 direct=1 00:16:45.783 bs=4096 00:16:45.783 iodepth=128 00:16:45.783 norandommap=0 00:16:45.783 numjobs=1 00:16:45.783 00:16:45.783 verify_dump=1 00:16:45.783 verify_backlog=512 00:16:45.783 verify_state_save=0 00:16:45.783 do_verify=1 00:16:45.783 verify=crc32c-intel 00:16:45.783 [job0] 00:16:45.783 filename=/dev/nvme0n1 00:16:45.783 [job1] 00:16:45.783 filename=/dev/nvme0n2 00:16:45.783 [job2] 00:16:45.783 filename=/dev/nvme0n3 00:16:45.783 [job3] 00:16:45.783 filename=/dev/nvme0n4 00:16:45.783 Could not set queue depth (nvme0n1) 00:16:45.783 Could not set queue depth (nvme0n2) 00:16:45.783 Could not set queue depth (nvme0n3) 00:16:45.783 Could not set queue depth (nvme0n4) 00:16:46.046 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:46.046 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:46.046 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:16:46.046 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:46.046 fio-3.35 00:16:46.046 Starting 4 threads 00:16:47.480 00:16:47.480 job0: (groupid=0, jobs=1): err= 0: pid=4093129: Wed May 15 01:19:22 2024 00:16:47.480 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:16:47.480 slat (usec): min=2, max=3615, avg=87.35, stdev=411.12 00:16:47.480 clat (usec): min=5862, max=22204, avg=11841.79, stdev=1843.71 00:16:47.480 lat (usec): min=7924, max=22208, avg=11929.14, stdev=1817.86 00:16:47.480 clat percentiles (usec): 00:16:47.480 | 1.00th=[ 8356], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10552], 00:16:47.480 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:16:47.480 | 70.00th=[12518], 80.00th=[13173], 90.00th=[13698], 95.00th=[15139], 00:16:47.480 | 99.00th=[18220], 99.50th=[19268], 99.90th=[21627], 99.95th=[21627], 00:16:47.480 | 99.99th=[22152] 00:16:47.480 write: IOPS=5136, BW=20.1MiB/s (21.0MB/s)(20.1MiB/1003msec); 0 zone resets 00:16:47.480 slat (usec): min=2, max=30042, avg=102.83, stdev=567.19 00:16:47.480 clat (usec): min=1669, max=57739, avg=12197.02, stdev=4569.39 00:16:47.480 lat (usec): min=5445, max=57752, avg=12299.86, stdev=4610.66 00:16:47.480 clat percentiles (usec): 00:16:47.480 | 1.00th=[ 7308], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9765], 00:16:47.480 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11207], 60.00th=[11731], 00:16:47.480 | 70.00th=[12256], 80.00th=[13960], 90.00th=[16319], 95.00th=[18220], 00:16:47.480 | 99.00th=[26608], 99.50th=[53216], 99.90th=[56886], 99.95th=[57410], 00:16:47.480 | 99.99th=[57934] 00:16:47.480 bw ( KiB/s): min=20480, max=20480, per=29.21%, avg=20480.00, stdev= 0.00, samples=2 00:16:47.480 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:16:47.480 lat (msec) : 2=0.01%, 10=17.89%, 20=80.49%, 50=1.29%, 100=0.31% 00:16:47.480 cpu : usr=3.29%, sys=3.99%, ctx=880, majf=0, minf=1 00:16:47.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:47.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:47.480 issued rwts: total=5120,5152,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.480 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:47.480 job1: (groupid=0, jobs=1): err= 0: pid=4093130: Wed May 15 01:19:22 2024 00:16:47.480 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:16:47.480 slat (usec): min=2, max=9562, avg=85.38, stdev=519.45 00:16:47.480 clat (usec): min=5760, max=28546, avg=11513.72, stdev=2742.06 00:16:47.480 lat (usec): min=5769, max=35963, avg=11599.11, stdev=2782.62 00:16:47.480 clat percentiles (usec): 00:16:47.480 | 1.00th=[ 6194], 5.00th=[ 8356], 10.00th=[ 9110], 20.00th=[ 9896], 00:16:47.480 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11076], 60.00th=[11469], 00:16:47.480 | 70.00th=[11994], 80.00th=[12649], 90.00th=[13960], 95.00th=[15664], 00:16:47.480 | 99.00th=[25560], 99.50th=[26084], 99.90th=[28443], 99.95th=[28443], 00:16:47.480 | 99.99th=[28443] 00:16:47.480 write: IOPS=4720, BW=18.4MiB/s (19.3MB/s)(18.6MiB/1010msec); 0 zone resets 00:16:47.480 slat (usec): min=3, max=17088, avg=121.88, stdev=523.73 00:16:47.480 clat (usec): min=4637, max=42440, avg=15626.73, stdev=4916.43 00:16:47.480 lat (usec): min=4645, max=42447, avg=15748.61, stdev=4941.61 00:16:47.480 clat percentiles (usec): 00:16:47.480 | 
1.00th=[ 8291], 5.00th=[ 9634], 10.00th=[10814], 20.00th=[11994], 00:16:47.480 | 30.00th=[12911], 40.00th=[13566], 50.00th=[14484], 60.00th=[15270], 00:16:47.480 | 70.00th=[17171], 80.00th=[19530], 90.00th=[21890], 95.00th=[23725], 00:16:47.480 | 99.00th=[33817], 99.50th=[35914], 99.90th=[36963], 99.95th=[36963], 00:16:47.480 | 99.99th=[42206] 00:16:47.480 bw ( KiB/s): min=16656, max=20513, per=26.51%, avg=18584.50, stdev=2727.31, samples=2 00:16:47.480 iops : min= 4164, max= 5128, avg=4646.00, stdev=681.65, samples=2 00:16:47.480 lat (msec) : 10=15.08%, 20=74.74%, 50=10.17% 00:16:47.480 cpu : usr=3.17%, sys=4.56%, ctx=791, majf=0, minf=1 00:16:47.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:16:47.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:47.480 issued rwts: total=4608,4768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.480 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:47.480 job2: (groupid=0, jobs=1): err= 0: pid=4093131: Wed May 15 01:19:22 2024 00:16:47.480 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:16:47.480 slat (usec): min=2, max=13478, avg=103.66, stdev=724.46 00:16:47.480 clat (usec): min=7469, max=35009, avg=14801.67, stdev=3768.05 00:16:47.480 lat (usec): min=7479, max=35046, avg=14905.32, stdev=3799.97 00:16:47.480 clat percentiles (usec): 00:16:47.480 | 1.00th=[ 8356], 5.00th=[10814], 10.00th=[11338], 20.00th=[12256], 00:16:47.480 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13566], 60.00th=[14615], 00:16:47.480 | 70.00th=[16057], 80.00th=[17171], 90.00th=[19530], 95.00th=[21627], 00:16:47.480 | 99.00th=[28181], 99.50th=[30802], 99.90th=[32113], 99.95th=[32113], 00:16:47.480 | 99.99th=[34866] 00:16:47.480 write: IOPS=4460, BW=17.4MiB/s (18.3MB/s)(17.5MiB/1004msec); 0 zone resets 00:16:47.480 slat (usec): min=3, max=11256, avg=120.05, stdev=644.93 00:16:47.480 clat (usec): min=2515, max=52025, avg=14677.81, stdev=7553.94 00:16:47.480 lat (usec): min=2531, max=52032, avg=14797.86, stdev=7592.98 00:16:47.480 clat percentiles (usec): 00:16:47.480 | 1.00th=[ 6652], 5.00th=[ 6915], 10.00th=[ 7635], 20.00th=[ 9896], 00:16:47.480 | 30.00th=[10683], 40.00th=[12125], 50.00th=[13042], 60.00th=[14222], 00:16:47.480 | 70.00th=[15664], 80.00th=[17695], 90.00th=[22152], 95.00th=[29492], 00:16:47.480 | 99.00th=[47973], 99.50th=[49546], 99.90th=[52167], 99.95th=[52167], 00:16:47.480 | 99.99th=[52167] 00:16:47.480 bw ( KiB/s): min=15152, max=19687, per=24.85%, avg=17419.50, stdev=3206.73, samples=2 00:16:47.480 iops : min= 3788, max= 4921, avg=4354.50, stdev=801.15, samples=2 00:16:47.480 lat (msec) : 4=0.19%, 10=12.70%, 20=76.10%, 50=10.85%, 100=0.16% 00:16:47.480 cpu : usr=5.98%, sys=5.28%, ctx=441, majf=0, minf=1 00:16:47.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:47.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:47.480 issued rwts: total=4096,4478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.480 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:47.480 job3: (groupid=0, jobs=1): err= 0: pid=4093132: Wed May 15 01:19:22 2024 00:16:47.480 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:16:47.480 slat (usec): min=2, max=60210, avg=154.79, stdev=1496.77 00:16:47.480 clat (usec): min=3406, max=79630, avg=22407.80, 
stdev=14994.88 00:16:47.480 lat (usec): min=3997, max=79636, avg=22562.59, stdev=15052.27 00:16:47.480 clat percentiles (usec): 00:16:47.480 | 1.00th=[ 6783], 5.00th=[ 8717], 10.00th=[10945], 20.00th=[12780], 00:16:47.480 | 30.00th=[13829], 40.00th=[15008], 50.00th=[16909], 60.00th=[19006], 00:16:47.480 | 70.00th=[22938], 80.00th=[31589], 90.00th=[41681], 95.00th=[48497], 00:16:47.480 | 99.00th=[77071], 99.50th=[77071], 99.90th=[79168], 99.95th=[79168], 00:16:47.480 | 99.99th=[79168] 00:16:47.480 write: IOPS=3280, BW=12.8MiB/s (13.4MB/s)(12.9MiB/1007msec); 0 zone resets 00:16:47.480 slat (usec): min=2, max=27843, avg=138.98, stdev=1022.31 00:16:47.480 clat (usec): min=1384, max=63723, avg=17726.09, stdev=10198.50 00:16:47.480 lat (usec): min=1502, max=63734, avg=17865.07, stdev=10246.95 00:16:47.480 clat percentiles (usec): 00:16:47.480 | 1.00th=[ 5538], 5.00th=[ 7963], 10.00th=[ 8979], 20.00th=[11207], 00:16:47.480 | 30.00th=[12649], 40.00th=[13829], 50.00th=[15008], 60.00th=[16581], 00:16:47.480 | 70.00th=[17957], 80.00th=[20317], 90.00th=[28705], 95.00th=[42206], 00:16:47.480 | 99.00th=[57410], 99.50th=[59507], 99.90th=[63701], 99.95th=[63701], 00:16:47.480 | 99.99th=[63701] 00:16:47.480 bw ( KiB/s): min=12312, max=13112, per=18.13%, avg=12712.00, stdev=565.69, samples=2 00:16:47.480 iops : min= 3078, max= 3278, avg=3178.00, stdev=141.42, samples=2 00:16:47.480 lat (msec) : 2=0.16%, 4=0.24%, 10=9.47%, 20=61.38%, 50=25.07% 00:16:47.480 lat (msec) : 100=3.69% 00:16:47.480 cpu : usr=2.98%, sys=3.68%, ctx=385, majf=0, minf=1 00:16:47.480 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:47.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:47.480 issued rwts: total=3072,3303,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.480 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:47.480 00:16:47.480 Run status group 0 (all jobs): 00:16:47.480 READ: bw=65.3MiB/s (68.5MB/s), 11.9MiB/s-19.9MiB/s (12.5MB/s-20.9MB/s), io=66.0MiB (69.2MB), run=1003-1010msec 00:16:47.480 WRITE: bw=68.5MiB/s (71.8MB/s), 12.8MiB/s-20.1MiB/s (13.4MB/s-21.0MB/s), io=69.1MiB (72.5MB), run=1003-1010msec 00:16:47.480 00:16:47.480 Disk stats (read/write): 00:16:47.480 nvme0n1: ios=4146/4378, merge=0/0, ticks=12100/13811, in_queue=25911, util=94.29% 00:16:47.480 nvme0n2: ios=3631/3912, merge=0/0, ticks=21145/29008, in_queue=50153, util=89.45% 00:16:47.480 nvme0n3: ios=3621/3742, merge=0/0, ticks=52284/45874, in_queue=98158, util=96.59% 00:16:47.480 nvme0n4: ios=2563/2648, merge=0/0, ticks=52387/36869, in_queue=89256, util=89.09% 00:16:47.480 01:19:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:47.480 [global] 00:16:47.480 thread=1 00:16:47.480 invalidate=1 00:16:47.480 rw=randwrite 00:16:47.480 time_based=1 00:16:47.480 runtime=1 00:16:47.480 ioengine=libaio 00:16:47.480 direct=1 00:16:47.480 bs=4096 00:16:47.480 iodepth=128 00:16:47.480 norandommap=0 00:16:47.480 numjobs=1 00:16:47.480 00:16:47.480 verify_dump=1 00:16:47.480 verify_backlog=512 00:16:47.480 verify_state_save=0 00:16:47.480 do_verify=1 00:16:47.480 verify=crc32c-intel 00:16:47.480 [job0] 00:16:47.480 filename=/dev/nvme0n1 00:16:47.480 [job1] 00:16:47.480 filename=/dev/nvme0n2 00:16:47.480 [job2] 00:16:47.480 filename=/dev/nvme0n3 00:16:47.480 [job3] 00:16:47.480 filename=/dev/nvme0n4 
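The four namespaces these fio jobs exercise (/dev/nvme0n1 through /dev/nvme0n4) were assembled earlier in the trace, between target/fio.sh@19 and target/fio.sh@48. Consolidated into a shell sketch, with $rpc standing in for the scripts/rpc.py path used throughout the log, the sequence is roughly:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# TCP transport, flags exactly as traced.
$rpc nvmf_create_transport -t tcp -o -u 8192

# Seven malloc bdevs: Malloc0/Malloc1 are exported directly, Malloc2/Malloc3
# become a RAID0 volume, Malloc4-Malloc6 a concat volume.
$rpc bdev_malloc_create 64 512   # Malloc0
$rpc bdev_malloc_create 64 512   # Malloc1
$rpc bdev_malloc_create 64 512   # Malloc2
$rpc bdev_malloc_create 64 512   # Malloc3
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_malloc_create 64 512   # Malloc4
$rpc bdev_malloc_create 64 512   # Malloc5
$rpc bdev_malloc_create 64 512   # Malloc6
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# One subsystem with four namespaces and a TCP listener on the target address.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

# The initiator connects and waitforserial blocks until all four namespaces
# show up as block devices with serial SPDKISFASTANDAWESOME.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
    --hostid=006f0d1b-21c0-e711-906e-00163566263e

Each fio-wrapper run above then drives those block devices with a short libaio job per namespace; the -i, -d, -t and -r flags line up with the bs=4096, iodepth, rw= and runtime= lines in the job file the wrapper prints before each run.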
00:16:47.480 Could not set queue depth (nvme0n1) 00:16:47.480 Could not set queue depth (nvme0n2) 00:16:47.480 Could not set queue depth (nvme0n3) 00:16:47.480 Could not set queue depth (nvme0n4) 00:16:47.738 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:47.738 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:47.738 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:47.738 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:47.738 fio-3.35 00:16:47.738 Starting 4 threads 00:16:49.125 00:16:49.125 job0: (groupid=0, jobs=1): err= 0: pid=4093551: Wed May 15 01:19:24 2024 00:16:49.125 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:16:49.125 slat (nsec): min=1867, max=44937k, avg=120019.62, stdev=1108958.32 00:16:49.125 clat (usec): min=3942, max=75434, avg=16373.96, stdev=12707.96 00:16:49.125 lat (usec): min=3950, max=75442, avg=16493.98, stdev=12763.64 00:16:49.125 clat percentiles (usec): 00:16:49.125 | 1.00th=[ 7308], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9896], 00:16:49.125 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11469], 60.00th=[13042], 00:16:49.125 | 70.00th=[15139], 80.00th=[17957], 90.00th=[29230], 95.00th=[51119], 00:16:49.125 | 99.00th=[68682], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:16:49.126 | 99.99th=[74974] 00:16:49.126 write: IOPS=4100, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1003msec); 0 zone resets 00:16:49.126 slat (usec): min=2, max=8448, avg=112.90, stdev=533.40 00:16:49.126 clat (usec): min=1507, max=44409, avg=14525.21, stdev=6475.79 00:16:49.126 lat (usec): min=3653, max=44415, avg=14638.12, stdev=6505.14 00:16:49.126 clat percentiles (usec): 00:16:49.126 | 1.00th=[ 4555], 5.00th=[ 7111], 10.00th=[ 8455], 20.00th=[ 9634], 00:16:49.126 | 30.00th=[10945], 40.00th=[12125], 50.00th=[13304], 60.00th=[14353], 00:16:49.126 | 70.00th=[15795], 80.00th=[18220], 90.00th=[21890], 95.00th=[27395], 00:16:49.126 | 99.00th=[39060], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:16:49.126 | 99.99th=[44303] 00:16:49.126 bw ( KiB/s): min=13552, max=19216, per=24.13%, avg=16384.00, stdev=4005.05, samples=2 00:16:49.126 iops : min= 3388, max= 4804, avg=4096.00, stdev=1001.26, samples=2 00:16:49.126 lat (msec) : 2=0.01%, 4=0.16%, 10=22.16%, 20=61.54%, 50=13.35% 00:16:49.126 lat (msec) : 100=2.78% 00:16:49.126 cpu : usr=3.99%, sys=3.89%, ctx=479, majf=0, minf=1 00:16:49.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:49.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:49.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:49.126 issued rwts: total=4096,4113,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:49.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:49.126 job1: (groupid=0, jobs=1): err= 0: pid=4093552: Wed May 15 01:19:24 2024 00:16:49.126 read: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec) 00:16:49.126 slat (usec): min=2, max=44966, avg=145.79, stdev=1424.37 00:16:49.126 clat (usec): min=1982, max=74555, avg=21627.92, stdev=15708.53 00:16:49.126 lat (usec): min=2002, max=75711, avg=21773.72, stdev=15794.86 00:16:49.126 clat percentiles (usec): 00:16:49.126 | 1.00th=[ 6783], 5.00th=[ 7832], 10.00th=[ 8979], 20.00th=[10159], 00:16:49.126 | 30.00th=[11338], 
40.00th=[12780], 50.00th=[14615], 60.00th=[16909], 00:16:49.126 | 70.00th=[23725], 80.00th=[33162], 90.00th=[45876], 95.00th=[57410], 00:16:49.126 | 99.00th=[65799], 99.50th=[65799], 99.90th=[65799], 99.95th=[67634], 00:16:49.126 | 99.99th=[74974] 00:16:49.126 write: IOPS=3274, BW=12.8MiB/s (13.4MB/s)(12.9MiB/1010msec); 0 zone resets 00:16:49.126 slat (usec): min=2, max=35942, avg=158.17, stdev=1176.63 00:16:49.126 clat (usec): min=8308, max=62412, avg=17391.15, stdev=6628.76 00:16:49.126 lat (usec): min=8315, max=62822, avg=17549.32, stdev=6727.75 00:16:49.126 clat percentiles (usec): 00:16:49.126 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[11994], 00:16:49.126 | 30.00th=[13960], 40.00th=[14877], 50.00th=[15533], 60.00th=[17171], 00:16:49.126 | 70.00th=[19530], 80.00th=[21627], 90.00th=[26084], 95.00th=[28705], 00:16:49.126 | 99.00th=[47449], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:16:49.126 | 99.99th=[62653] 00:16:49.126 bw ( KiB/s): min= 9056, max=16384, per=18.73%, avg=12720.00, stdev=5181.68, samples=2 00:16:49.126 iops : min= 2264, max= 4096, avg=3180.00, stdev=1295.42, samples=2 00:16:49.126 lat (msec) : 2=0.03%, 4=0.02%, 10=13.15%, 20=56.36%, 50=25.43% 00:16:49.126 lat (msec) : 100=5.02% 00:16:49.126 cpu : usr=2.68%, sys=2.97%, ctx=437, majf=0, minf=1 00:16:49.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:49.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:49.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:49.126 issued rwts: total=3072,3307,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:49.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:49.126 job2: (groupid=0, jobs=1): err= 0: pid=4093553: Wed May 15 01:19:24 2024 00:16:49.126 read: IOPS=4254, BW=16.6MiB/s (17.4MB/s)(16.8MiB/1010msec) 00:16:49.126 slat (nsec): min=1843, max=11397k, avg=99970.14, stdev=658321.86 00:16:49.126 clat (usec): min=1525, max=47669, avg=13556.14, stdev=5007.11 00:16:49.126 lat (usec): min=4337, max=47677, avg=13656.11, stdev=5039.36 00:16:49.126 clat percentiles (usec): 00:16:49.126 | 1.00th=[ 6456], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[10421], 00:16:49.126 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12387], 00:16:49.126 | 70.00th=[13960], 80.00th=[15664], 90.00th=[20579], 95.00th=[25297], 00:16:49.126 | 99.00th=[28443], 99.50th=[32900], 99.90th=[47449], 99.95th=[47449], 00:16:49.126 | 99.99th=[47449] 00:16:49.126 write: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec); 0 zone resets 00:16:49.126 slat (usec): min=2, max=13111, avg=115.90, stdev=637.04 00:16:49.126 clat (usec): min=1626, max=38167, avg=15099.10, stdev=5801.03 00:16:49.126 lat (usec): min=2704, max=38180, avg=15215.00, stdev=5837.38 00:16:49.126 clat percentiles (usec): 00:16:49.126 | 1.00th=[ 5800], 5.00th=[ 7701], 10.00th=[ 8586], 20.00th=[10028], 00:16:49.126 | 30.00th=[11207], 40.00th=[12387], 50.00th=[13829], 60.00th=[15270], 00:16:49.126 | 70.00th=[17695], 80.00th=[20317], 90.00th=[23725], 95.00th=[26608], 00:16:49.126 | 99.00th=[28967], 99.50th=[30540], 99.90th=[32637], 99.95th=[33162], 00:16:49.126 | 99.99th=[38011] 00:16:49.126 bw ( KiB/s): min=17144, max=19720, per=27.14%, avg=18432.00, stdev=1821.51, samples=2 00:16:49.126 iops : min= 4286, max= 4930, avg=4608.00, stdev=455.38, samples=2 00:16:49.126 lat (msec) : 2=0.02%, 4=0.33%, 10=17.42%, 20=66.12%, 50=16.11% 00:16:49.126 cpu : usr=3.07%, sys=5.75%, ctx=509, majf=0, minf=1 00:16:49.126 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:49.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:49.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:49.126 issued rwts: total=4297,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:49.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:49.126 job3: (groupid=0, jobs=1): err= 0: pid=4093554: Wed May 15 01:19:24 2024 00:16:49.126 read: IOPS=4931, BW=19.3MiB/s (20.2MB/s)(19.4MiB/1007msec) 00:16:49.126 slat (nsec): min=1736, max=15287k, avg=96186.82, stdev=748294.21 00:16:49.126 clat (usec): min=2020, max=28287, avg=13942.58, stdev=4213.15 00:16:49.126 lat (usec): min=2034, max=28297, avg=14038.76, stdev=4240.21 00:16:49.126 clat percentiles (usec): 00:16:49.126 | 1.00th=[ 5932], 5.00th=[ 7898], 10.00th=[ 8979], 20.00th=[10159], 00:16:49.126 | 30.00th=[10945], 40.00th=[12780], 50.00th=[14091], 60.00th=[14746], 00:16:49.126 | 70.00th=[15401], 80.00th=[17171], 90.00th=[20579], 95.00th=[22152], 00:16:49.126 | 99.00th=[25035], 99.50th=[27395], 99.90th=[28181], 99.95th=[28181], 00:16:49.126 | 99.99th=[28181] 00:16:49.126 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:16:49.126 slat (usec): min=2, max=9244, avg=76.16, stdev=476.10 00:16:49.126 clat (usec): min=517, max=29038, avg=11065.61, stdev=4692.32 00:16:49.126 lat (usec): min=524, max=29050, avg=11141.77, stdev=4711.03 00:16:49.126 clat percentiles (usec): 00:16:49.126 | 1.00th=[ 1762], 5.00th=[ 4752], 10.00th=[ 5997], 20.00th=[ 7439], 00:16:49.126 | 30.00th=[ 8356], 40.00th=[ 8979], 50.00th=[10028], 60.00th=[11469], 00:16:49.126 | 70.00th=[12649], 80.00th=[14746], 90.00th=[17433], 95.00th=[20579], 00:16:49.126 | 99.00th=[23987], 99.50th=[25297], 99.90th=[28967], 99.95th=[28967], 00:16:49.126 | 99.99th=[28967] 00:16:49.126 bw ( KiB/s): min=20480, max=20480, per=30.16%, avg=20480.00, stdev= 0.00, samples=2 00:16:49.126 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:16:49.126 lat (usec) : 750=0.03%, 1000=0.01% 00:16:49.126 lat (msec) : 2=0.63%, 4=0.76%, 10=31.85%, 20=58.23%, 50=8.49% 00:16:49.126 cpu : usr=3.58%, sys=6.06%, ctx=476, majf=0, minf=1 00:16:49.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:49.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:49.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:49.126 issued rwts: total=4966,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:49.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:49.126 00:16:49.126 Run status group 0 (all jobs): 00:16:49.126 READ: bw=63.5MiB/s (66.6MB/s), 11.9MiB/s-19.3MiB/s (12.5MB/s-20.2MB/s), io=64.2MiB (67.3MB), run=1003-1010msec 00:16:49.126 WRITE: bw=66.3MiB/s (69.5MB/s), 12.8MiB/s-19.9MiB/s (13.4MB/s-20.8MB/s), io=67.0MiB (70.2MB), run=1003-1010msec 00:16:49.126 00:16:49.126 Disk stats (read/write): 00:16:49.127 nvme0n1: ios=3122/3368, merge=0/0, ticks=38613/37235, in_queue=75848, util=90.93% 00:16:49.127 nvme0n2: ios=2819/3072, merge=0/0, ticks=22941/22710, in_queue=45651, util=94.61% 00:16:49.127 nvme0n3: ios=3605/3698, merge=0/0, ticks=25112/27817, in_queue=52929, util=97.25% 00:16:49.127 nvme0n4: ios=4153/4165, merge=0/0, ticks=53282/38526, in_queue=91808, util=94.11% 00:16:49.127 01:19:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:49.127 01:19:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=4093820 00:16:49.127 
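The last step traced here is a hot-remove pass: fio.sh syncs, starts a ten-second read job against the same four namespaces in the background (the fio_pid assignment just above), sleeps three seconds, and then deletes the RAID volumes and their member malloc bdevs over RPC while the job is still running. The Remote I/O error lines that follow come from that job failing as its namespaces disappear. As a sketch, run from the SPDK checkout (the trace uses the absolute /var/jenkins/... paths, and the literal pid 4093820 is captured via $! here):

sync

# Ten-second read job in the background, one job per namespace.
scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3

# Delete the RAID volumes first (their namespaces go with them), then the
# backing malloc bdevs, all while the read job is still in flight.
scripts/rpc.py bdev_raid_delete concat0
scripts/rpc.py bdev_raid_delete raid0
scripts/rpc.py bdev_malloc_delete Malloc0
scripts/rpc.py bdev_malloc_delete Malloc1
scripts/rpc.py bdev_malloc_delete Malloc2
# The loop in fio.sh walks all of the malloc bdevs created earlier; only the
# first three deletions are visible before this part of the log ends.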
01:19:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:49.127 01:19:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:49.127 [global] 00:16:49.127 thread=1 00:16:49.127 invalidate=1 00:16:49.127 rw=read 00:16:49.127 time_based=1 00:16:49.127 runtime=10 00:16:49.127 ioengine=libaio 00:16:49.127 direct=1 00:16:49.127 bs=4096 00:16:49.127 iodepth=1 00:16:49.127 norandommap=1 00:16:49.127 numjobs=1 00:16:49.127 00:16:49.127 [job0] 00:16:49.127 filename=/dev/nvme0n1 00:16:49.127 [job1] 00:16:49.127 filename=/dev/nvme0n2 00:16:49.127 [job2] 00:16:49.127 filename=/dev/nvme0n3 00:16:49.127 [job3] 00:16:49.127 filename=/dev/nvme0n4 00:16:49.127 Could not set queue depth (nvme0n1) 00:16:49.127 Could not set queue depth (nvme0n2) 00:16:49.127 Could not set queue depth (nvme0n3) 00:16:49.127 Could not set queue depth (nvme0n4) 00:16:49.383 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.383 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.383 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.383 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.383 fio-3.35 00:16:49.383 Starting 4 threads 00:16:51.903 01:19:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:52.159 01:19:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:52.159 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=684032, buflen=4096 00:16:52.159 fio: pid=4093986, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:52.159 01:19:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:52.159 01:19:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:52.159 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=278528, buflen=4096 00:16:52.159 fio: pid=4093985, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:52.415 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=19845120, buflen=4096 00:16:52.415 fio: pid=4093983, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:52.415 01:19:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:52.415 01:19:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:52.851 01:19:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:52.851 01:19:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:52.851 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=11735040, buflen=4096 00:16:52.851 fio: pid=4093984, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:52.851 00:16:52.851 job0: 
(groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4093983: Wed May 15 01:19:28 2024 00:16:52.851 read: IOPS=1619, BW=6475KiB/s (6631kB/s)(18.9MiB/2993msec) 00:16:52.851 slat (usec): min=9, max=15619, avg=16.58, stdev=312.74 00:16:52.851 clat (usec): min=381, max=41989, avg=595.17, stdev=1873.54 00:16:52.851 lat (usec): min=391, max=42014, avg=611.75, stdev=1901.17 00:16:52.851 clat percentiles (usec): 00:16:52.851 | 1.00th=[ 420], 5.00th=[ 445], 10.00th=[ 453], 20.00th=[ 465], 00:16:52.851 | 30.00th=[ 478], 40.00th=[ 486], 50.00th=[ 494], 60.00th=[ 506], 00:16:52.851 | 70.00th=[ 515], 80.00th=[ 529], 90.00th=[ 562], 95.00th=[ 652], 00:16:52.851 | 99.00th=[ 824], 99.50th=[ 971], 99.90th=[41681], 99.95th=[42206], 00:16:52.851 | 99.99th=[42206] 00:16:52.851 bw ( KiB/s): min= 1040, max= 8432, per=64.60%, avg=6372.80, stdev=3014.95, samples=5 00:16:52.851 iops : min= 260, max= 2108, avg=1593.20, stdev=753.74, samples=5 00:16:52.851 lat (usec) : 500=54.35%, 750=43.25%, 1000=1.94% 00:16:52.851 lat (msec) : 2=0.21%, 20=0.02%, 50=0.21% 00:16:52.851 cpu : usr=0.60%, sys=2.14%, ctx=4850, majf=0, minf=1 00:16:52.851 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.851 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.851 issued rwts: total=4846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.851 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.851 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4093984: Wed May 15 01:19:28 2024 00:16:52.851 read: IOPS=889, BW=3557KiB/s (3642kB/s)(11.2MiB/3222msec) 00:16:52.851 slat (usec): min=8, max=16639, avg=21.45, stdev=310.62 00:16:52.851 clat (usec): min=437, max=42913, avg=1092.94, stdev=4279.98 00:16:52.851 lat (usec): min=446, max=58993, avg=1114.37, stdev=4347.76 00:16:52.851 clat percentiles (usec): 00:16:52.851 | 1.00th=[ 461], 5.00th=[ 498], 10.00th=[ 529], 20.00th=[ 545], 00:16:52.851 | 30.00th=[ 553], 40.00th=[ 570], 50.00th=[ 644], 60.00th=[ 676], 00:16:52.851 | 70.00th=[ 725], 80.00th=[ 750], 90.00th=[ 799], 95.00th=[ 857], 00:16:52.851 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:52.851 | 99.99th=[42730] 00:16:52.851 bw ( KiB/s): min= 96, max= 7144, per=38.66%, avg=3813.50, stdev=3052.98, samples=6 00:16:52.851 iops : min= 24, max= 1786, avg=953.33, stdev=763.31, samples=6 00:16:52.851 lat (usec) : 500=5.34%, 750=75.37%, 1000=17.83% 00:16:52.851 lat (msec) : 2=0.35%, 50=1.08% 00:16:52.851 cpu : usr=0.50%, sys=1.77%, ctx=2868, majf=0, minf=1 00:16:52.851 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.851 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.851 issued rwts: total=2866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.851 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.851 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4093985: Wed May 15 01:19:28 2024 00:16:52.851 read: IOPS=24, BW=95.9KiB/s (98.2kB/s)(272KiB/2837msec) 00:16:52.851 slat (nsec): min=23424, max=78899, avg=26328.16, stdev=6508.11 00:16:52.851 clat (usec): min=1040, max=43037, avg=41373.11, stdev=4971.15 00:16:52.851 lat (usec): min=1072, max=43063, avg=41399.43, 
stdev=4970.47 00:16:52.851 clat percentiles (usec): 00:16:52.851 | 1.00th=[ 1045], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:16:52.851 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:52.851 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:52.851 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:16:52.851 | 99.99th=[43254] 00:16:52.851 bw ( KiB/s): min= 96, max= 96, per=0.97%, avg=96.00, stdev= 0.00, samples=5 00:16:52.851 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:16:52.851 lat (msec) : 2=1.45%, 50=97.10% 00:16:52.851 cpu : usr=0.11%, sys=0.00%, ctx=71, majf=0, minf=1 00:16:52.851 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.851 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.851 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.851 issued rwts: total=69,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.851 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.851 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=4093986: Wed May 15 01:19:28 2024 00:16:52.851 read: IOPS=63, BW=252KiB/s (258kB/s)(668KiB/2648msec) 00:16:52.851 slat (nsec): min=8987, max=45935, avg=16289.38, stdev=7789.08 00:16:52.851 clat (usec): min=489, max=42019, avg=15713.98, stdev=19645.37 00:16:52.851 lat (usec): min=498, max=42044, avg=15730.21, stdev=19651.40 00:16:52.851 clat percentiles (usec): 00:16:52.851 | 1.00th=[ 502], 5.00th=[ 519], 10.00th=[ 529], 20.00th=[ 562], 00:16:52.851 | 30.00th=[ 586], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 766], 00:16:52.851 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:16:52.851 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:52.851 | 99.99th=[42206] 00:16:52.851 bw ( KiB/s): min= 96, max= 920, per=2.66%, avg=262.40, stdev=367.63, samples=5 00:16:52.851 iops : min= 24, max= 230, avg=65.60, stdev=91.91, samples=5 00:16:52.851 lat (usec) : 500=1.19%, 750=58.33%, 1000=1.79% 00:16:52.851 lat (msec) : 2=0.60%, 10=0.60%, 50=36.90% 00:16:52.852 cpu : usr=0.08%, sys=0.08%, ctx=168, majf=0, minf=2 00:16:52.852 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:52.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.852 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:52.852 issued rwts: total=168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:52.852 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:52.852 00:16:52.852 Run status group 0 (all jobs): 00:16:52.852 READ: bw=9863KiB/s (10.1MB/s), 95.9KiB/s-6475KiB/s (98.2kB/s-6631kB/s), io=31.0MiB (32.5MB), run=2648-3222msec 00:16:52.852 00:16:52.852 Disk stats (read/write): 00:16:52.852 nvme0n1: ios=4598/0, merge=0/0, ticks=3369/0, in_queue=3369, util=98.26% 00:16:52.852 nvme0n2: ios=2862/0, merge=0/0, ticks=2965/0, in_queue=2965, util=95.07% 00:16:52.852 nvme0n3: ios=117/0, merge=0/0, ticks=3904/0, in_queue=3904, util=99.12% 00:16:52.852 nvme0n4: ios=165/0, merge=0/0, ticks=2539/0, in_queue=2539, util=96.45% 00:16:52.852 01:19:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:52.852 01:19:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:53.108 
01:19:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:53.108 01:19:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:53.108 01:19:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:53.108 01:19:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:53.364 01:19:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:53.364 01:19:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:53.621 01:19:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:53.621 01:19:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 4093820 00:16:53.621 01:19:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:53.621 01:19:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:53.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.621 01:19:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:53.621 01:19:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:16:53.621 01:19:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:53.621 01:19:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.621 01:19:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:53.621 01:19:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.621 01:19:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:16:53.621 01:19:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:53.621 01:19:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:53.621 nvmf hotplug test: fio failed as expected 00:16:53.621 01:19:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.878 01:19:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:53.878 01:19:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:53.878 01:19:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:53.878 01:19:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:53.878 01:19:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:53.878 01:19:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:53.878 01:19:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:53.878 01:19:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:53.878 01:19:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:53.878 01:19:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:53.878 01:19:29 nvmf_tcp.nvmf_fio_target -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:53.878 rmmod nvme_tcp 00:16:53.878 rmmod nvme_fabrics 00:16:53.878 rmmod nvme_keyring 00:16:53.878 01:19:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:53.878 01:19:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:53.878 01:19:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:53.878 01:19:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 4090728 ']' 00:16:53.878 01:19:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 4090728 00:16:53.878 01:19:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 4090728 ']' 00:16:53.878 01:19:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 4090728 00:16:54.135 01:19:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:16:54.135 01:19:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:54.135 01:19:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4090728 00:16:54.135 01:19:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:54.135 01:19:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:54.135 01:19:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4090728' 00:16:54.135 killing process with pid 4090728 00:16:54.135 01:19:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 4090728 00:16:54.135 [2024-05-15 01:19:29.625475] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:54.135 01:19:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 4090728 00:16:54.392 01:19:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:54.392 01:19:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:54.392 01:19:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:54.392 01:19:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:54.392 01:19:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:54.392 01:19:29 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.392 01:19:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.392 01:19:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.293 01:19:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:56.293 00:16:56.293 real 0m28.186s 00:16:56.293 user 2m3.032s 00:16:56.293 sys 0m9.888s 00:16:56.293 01:19:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:56.293 01:19:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.293 ************************************ 00:16:56.293 END TEST nvmf_fio_target 00:16:56.293 ************************************ 00:16:56.293 01:19:31 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:56.293 01:19:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:56.293 01:19:31 
nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:56.293 01:19:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:56.551 ************************************ 00:16:56.551 START TEST nvmf_bdevio 00:16:56.551 ************************************ 00:16:56.551 01:19:31 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:56.551 * Looking for test storage... 00:16:56.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:16:56.551 01:19:32 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:03.104 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:03.104 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:03.105 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:03.105 Found net devices under 0000:af:00.0: cvl_0_0 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:03.105 
Found net devices under 0000:af:00.1: cvl_0_1 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:03.105 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:03.362 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:03.362 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:03.362 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:03.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:17:03.362 00:17:03.362 --- 10.0.0.2 ping statistics --- 00:17:03.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.362 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:17:03.362 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:03.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:03.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:17:03.362 00:17:03.362 --- 10.0.0.1 ping statistics --- 00:17:03.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.362 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:17:03.362 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.362 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:03.362 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:03.362 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.362 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:03.362 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:03.362 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.362 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:03.362 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:03.362 01:19:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:03.362 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:03.363 01:19:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:03.363 01:19:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:03.363 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=4098499 00:17:03.363 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:03.363 01:19:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 4098499 00:17:03.363 01:19:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 4098499 ']' 00:17:03.363 01:19:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.363 01:19:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:03.363 01:19:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.363 01:19:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:03.363 01:19:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:03.363 [2024-05-15 01:19:38.931468] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:17:03.363 [2024-05-15 01:19:38.931514] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.363 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.363 [2024-05-15 01:19:39.004090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:03.620 [2024-05-15 01:19:39.076626] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.620 [2024-05-15 01:19:39.076658] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:03.620 [2024-05-15 01:19:39.076668] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.620 [2024-05-15 01:19:39.076677] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.620 [2024-05-15 01:19:39.076684] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.620 [2024-05-15 01:19:39.076806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:03.620 [2024-05-15 01:19:39.076917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:03.620 [2024-05-15 01:19:39.077012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:03.620 [2024-05-15 01:19:39.077012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:04.183 [2024-05-15 01:19:39.793078] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:04.183 Malloc0 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
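The rpc_cmd calls traced above build the whole target for this test: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDK00000000000001, its namespace, and a TCP listener on 10.0.0.2:4420. The following is only a condensed sketch of the same bring-up using the plain rpc.py client; it assumes an nvmf_tgt that is already running and reachable on the default /var/tmp/spdk.sock rather than the test's namespaced instance, and it keeps the transport flags exactly as they appear in the trace.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192        # same transport options as the traced run
$RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, fixed serial
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the bdev as a namespace
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listen on 10.0.0.2:4420

Once these calls succeed, any NVMe/TCP initiator that can reach 10.0.0.2:4420, such as the SPDK bdev_nvme driver used by bdevio below, can connect to cnode1.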
00:17:04.183 [2024-05-15 01:19:39.839428] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:04.183 [2024-05-15 01:19:39.839688] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:04.183 01:19:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:04.183 { 00:17:04.183 "params": { 00:17:04.183 "name": "Nvme$subsystem", 00:17:04.183 "trtype": "$TEST_TRANSPORT", 00:17:04.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:04.183 "adrfam": "ipv4", 00:17:04.183 "trsvcid": "$NVMF_PORT", 00:17:04.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:04.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:04.183 "hdgst": ${hdgst:-false}, 00:17:04.184 "ddgst": ${ddgst:-false} 00:17:04.184 }, 00:17:04.184 "method": "bdev_nvme_attach_controller" 00:17:04.184 } 00:17:04.184 EOF 00:17:04.184 )") 00:17:04.184 01:19:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:04.184 01:19:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:04.184 01:19:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:04.184 01:19:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:04.184 "params": { 00:17:04.184 "name": "Nvme1", 00:17:04.184 "trtype": "tcp", 00:17:04.184 "traddr": "10.0.0.2", 00:17:04.184 "adrfam": "ipv4", 00:17:04.184 "trsvcid": "4420", 00:17:04.184 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.184 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:04.184 "hdgst": false, 00:17:04.184 "ddgst": false 00:17:04.184 }, 00:17:04.184 "method": "bdev_nvme_attach_controller" 00:17:04.184 }' 00:17:04.440 [2024-05-15 01:19:39.889475] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
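bdevio takes its block devices from a JSON config instead of live RPC: gen_nvmf_target_json prints a bdev-subsystem config whose only entry is the bdev_nvme_attach_controller call shown above, and the test passes it as --json /dev/fd/62. Below is a standalone sketch with the values already substituted as they appear in the trace; the outer "subsystems"/"config" wrapper is assumed from the standard SPDK JSON config layout (only the inner entry is visible in the trace), and /dev/stdin stands in for the test's file-descriptor redirection.

BDEVIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio

# Attach the remote namespace over NVMe/TCP as bdev "Nvme1" and run the bdevio suite against it.
$BDEVIO --json /dev/stdin <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

The attached controller shows up as Nvme1n1, which is the device the "Suite: bdevio tests on: Nvme1n1" CUnit output further below is exercising.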
00:17:04.440 [2024-05-15 01:19:39.889524] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4098625 ] 00:17:04.440 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.440 [2024-05-15 01:19:39.961799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:04.440 [2024-05-15 01:19:40.038128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.440 [2024-05-15 01:19:40.038225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.440 [2024-05-15 01:19:40.038228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.697 I/O targets: 00:17:04.697 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:04.697 00:17:04.697 00:17:04.697 CUnit - A unit testing framework for C - Version 2.1-3 00:17:04.697 http://cunit.sourceforge.net/ 00:17:04.697 00:17:04.697 00:17:04.697 Suite: bdevio tests on: Nvme1n1 00:17:04.697 Test: blockdev write read block ...passed 00:17:04.697 Test: blockdev write zeroes read block ...passed 00:17:04.697 Test: blockdev write zeroes read no split ...passed 00:17:04.954 Test: blockdev write zeroes read split ...passed 00:17:04.954 Test: blockdev write zeroes read split partial ...passed 00:17:04.954 Test: blockdev reset ...[2024-05-15 01:19:40.504989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:04.954 [2024-05-15 01:19:40.505054] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dac7b0 (9): Bad file descriptor 00:17:05.210 [2024-05-15 01:19:40.682574] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:05.210 passed 00:17:05.210 Test: blockdev write read 8 blocks ...passed 00:17:05.210 Test: blockdev write read size > 128k ...passed 00:17:05.210 Test: blockdev write read invalid size ...passed 00:17:05.210 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:05.210 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:05.210 Test: blockdev write read max offset ...passed 00:17:05.210 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:05.210 Test: blockdev writev readv 8 blocks ...passed 00:17:05.210 Test: blockdev writev readv 30 x 1block ...passed 00:17:05.210 Test: blockdev writev readv block ...passed 00:17:05.469 Test: blockdev writev readv size > 128k ...passed 00:17:05.469 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:05.469 Test: blockdev comparev and writev ...[2024-05-15 01:19:40.907424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.469 [2024-05-15 01:19:40.907453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:05.469 [2024-05-15 01:19:40.907469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.469 [2024-05-15 01:19:40.907480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:05.469 [2024-05-15 01:19:40.907910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.469 [2024-05-15 01:19:40.907922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:05.469 [2024-05-15 01:19:40.907936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.469 [2024-05-15 01:19:40.907946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:05.469 [2024-05-15 01:19:40.908369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.469 [2024-05-15 01:19:40.908382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:05.469 [2024-05-15 01:19:40.908404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.469 [2024-05-15 01:19:40.908421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:05.469 [2024-05-15 01:19:40.908860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.469 [2024-05-15 01:19:40.908876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:05.469 [2024-05-15 01:19:40.908890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:05.469 [2024-05-15 01:19:40.908900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:05.469 passed 00:17:05.469 Test: blockdev nvme passthru rw ...passed 00:17:05.469 Test: blockdev nvme passthru vendor specific ...[2024-05-15 01:19:40.993080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:05.469 [2024-05-15 01:19:40.993098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:05.469 [2024-05-15 01:19:40.993401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:05.469 [2024-05-15 01:19:40.993414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:05.469 [2024-05-15 01:19:40.993711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:05.469 [2024-05-15 01:19:40.993724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:05.469 [2024-05-15 01:19:40.994029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:05.469 [2024-05-15 01:19:40.994042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:05.469 passed 00:17:05.469 Test: blockdev nvme admin passthru ...passed 00:17:05.469 Test: blockdev copy ...passed 00:17:05.469 00:17:05.469 Run Summary: Type Total Ran Passed Failed Inactive 00:17:05.469 suites 1 1 n/a 0 0 00:17:05.469 tests 23 23 23 0 0 00:17:05.469 asserts 152 152 152 0 n/a 00:17:05.469 00:17:05.469 Elapsed time = 1.550 seconds 00:17:05.753 01:19:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:05.753 01:19:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.753 01:19:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:05.753 01:19:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.753 01:19:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:05.753 01:19:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:05.753 01:19:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:05.753 01:19:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:05.753 01:19:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:05.754 01:19:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:05.754 01:19:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:05.754 01:19:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:05.754 rmmod nvme_tcp 00:17:05.754 rmmod nvme_fabrics 00:17:05.754 rmmod nvme_keyring 00:17:05.754 01:19:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:05.754 01:19:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:05.754 01:19:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:05.754 01:19:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 4098499 ']' 00:17:05.754 01:19:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 4098499 00:17:05.754 01:19:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
4098499 ']' 00:17:05.754 01:19:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 4098499 00:17:05.754 01:19:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:17:05.754 01:19:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:05.754 01:19:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4098499 00:17:05.754 01:19:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:17:05.754 01:19:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:17:05.754 01:19:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4098499' 00:17:05.754 killing process with pid 4098499 00:17:05.754 01:19:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 4098499 00:17:05.754 [2024-05-15 01:19:41.385341] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:05.754 01:19:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 4098499 00:17:06.011 01:19:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:06.011 01:19:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:06.011 01:19:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:06.011 01:19:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:06.011 01:19:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:06.011 01:19:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.011 01:19:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:06.011 01:19:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.547 01:19:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:08.547 00:17:08.547 real 0m11.698s 00:17:08.547 user 0m14.490s 00:17:08.547 sys 0m5.760s 00:17:08.547 01:19:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:08.547 01:19:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:08.547 ************************************ 00:17:08.547 END TEST nvmf_bdevio 00:17:08.547 ************************************ 00:17:08.547 01:19:43 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:08.547 01:19:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:08.547 01:19:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:08.547 01:19:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:08.547 ************************************ 00:17:08.547 START TEST nvmf_auth_target 00:17:08.547 ************************************ 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:08.547 * Looking for test storage... 
00:17:08.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.547 01:19:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # 
'[' -z tcp ']' 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:08.548 01:19:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:15.103 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:15.103 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:15.103 Found net devices under 
0000:af:00.0: cvl_0_0 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:15.103 Found net devices under 0000:af:00.1: cvl_0_1 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:15.103 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:15.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:15.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:17:15.103 00:17:15.103 --- 10.0.0.2 ping statistics --- 00:17:15.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.104 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:15.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:15.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:17:15.104 00:17:15.104 --- 10.0.0.1 ping statistics --- 00:17:15.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.104 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=4102535 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 4102535 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 4102535 ']' 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
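Before nvmf_auth_target starts issuing RPCs, nvmf_tcp_init carves the two ice ports into a small two-node topology: cvl_0_0 is moved into a fresh network namespace and addressed as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and nvmf_tgt is then launched inside that namespace with the nvmf_auth log flag. The sketch below is condensed from the commands visible in the trace; the interface names and addresses are the ones this rig uses, the build path is shortened to a relative one, and backgrounding the target is added for a stand-alone run.

#!/usr/bin/env bash
# Target port lives in its own namespace; initiator port stays in the root namespace.
TGT_IF=cvl_0_0
INI_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# admit NVMe/TCP traffic and sanity-check reachability in both directions
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

modprobe nvme-tcp

# the NVMe-oF target runs inside the namespace; -L nvmf_auth enables auth tracing
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &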
00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:15.104 01:19:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.036 01:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:16.036 01:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:17:16.036 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:16.036 01:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:16.036 01:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.036 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=4102776 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6b4f8f4638ee8649be33d1caf3cfecf01f22916a533608e5 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.RZi 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6b4f8f4638ee8649be33d1caf3cfecf01f22916a533608e5 0 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6b4f8f4638ee8649be33d1caf3cfecf01f22916a533608e5 0 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6b4f8f4638ee8649be33d1caf3cfecf01f22916a533608e5 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.RZi 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.RZi 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.RZi 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5f3e13ddb1a3e199d6cc9a984f140910 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ZXb 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5f3e13ddb1a3e199d6cc9a984f140910 1 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5f3e13ddb1a3e199d6cc9a984f140910 1 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5f3e13ddb1a3e199d6cc9a984f140910 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ZXb 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ZXb 00:17:16.037 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.ZXb 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=96265eaf0b3a4627545a0e1c75234cde3af144c19c1177d6 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.oFl 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 96265eaf0b3a4627545a0e1c75234cde3af144c19c1177d6 2 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 96265eaf0b3a4627545a0e1c75234cde3af144c19c1177d6 2 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=96265eaf0b3a4627545a0e1c75234cde3af144c19c1177d6 00:17:16.295 
01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.oFl 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.oFl 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.oFl 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ab0b8ed9b6e12f1b43fd9a4f95b480616d527d41a6e899f5811365c5301d7306 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.CuZ 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ab0b8ed9b6e12f1b43fd9a4f95b480616d527d41a6e899f5811365c5301d7306 3 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ab0b8ed9b6e12f1b43fd9a4f95b480616d527d41a6e899f5811365c5301d7306 3 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ab0b8ed9b6e12f1b43fd9a4f95b480616d527d41a6e899f5811365c5301d7306 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.CuZ 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.CuZ 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.CuZ 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 4102535 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 4102535 ']' 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
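gen_dhchap_key, traced above once per digest, boils down to three steps: read the requested number of random hex characters from /dev/urandom with xxd, wrap them in an nvme-cli style DHHC-1 secret whose second field is the digest index from the digests table (null=0, sha256=1, sha384=2, sha512=3), and store the result in a temp file restricted to mode 0600. A rough equivalent is sketched below; the real script does the formatting with an inline Python helper, and the assumption that the base64 payload ends with a little-endian CRC-32 of the hex string is mine, not something stated in the trace.

#!/usr/bin/env bash
# Approximation of gen_dhchap_key <digest> <len-in-hex-chars>.
gen_key() {
    local digest=$1 len=$2 idx key file
    declare -A idx_of=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    idx=${idx_of[$digest]}
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)          # len hex characters
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 -c '
import base64, sys, zlib
idx, key = int(sys.argv[1]), sys.argv[2].encode()
crc = zlib.crc32(key).to_bytes(4, "little")                 # byte order assumed for illustration
print(f"DHHC-1:{idx:02}:{base64.b64encode(key + crc).decode()}:")
' "$idx" "$key" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

gen_key null 48      # e.g. /tmp/spdk.key-null.XXX holding a DHHC-1:00:... secret
gen_key sha512 64    # e.g. /tmp/spdk.key-sha512.XXX holding a DHHC-1:03:... secret

In the trace that follows, each of these key files is registered on both RPC sockets with keyring_file_add_key, granted to the host on the target side via nvmf_subsystem_add_host --dhchap-key keyN, exercised from the host side with bdev_nvme_attach_controller --dhchap-key keyN, verified through nvmf_subsystem_get_qpairs (the jq filters assert that auth.state is "completed" and that digest and dhgroup match the round under test), and finally cross-checked with nvme connect using the matching --dhchap-secret string.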
00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:16.295 01:19:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 4102776 /var/tmp/host.sock 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@827 -- # '[' -z 4102776 ']' 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/host.sock 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:16.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@860 -- # return 0 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RZi 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.RZi 00:17:16.553 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.RZi 00:17:16.811 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:17:16.811 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ZXb 00:17:16.811 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.811 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.811 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.811 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.ZXb 00:17:16.811 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
keyring_file_add_key key1 /tmp/spdk.key-sha256.ZXb 00:17:17.068 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:17:17.068 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.oFl 00:17:17.068 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.068 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.068 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.068 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.oFl 00:17:17.068 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.oFl 00:17:17.068 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:17:17.068 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.CuZ 00:17:17.326 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.326 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.326 01:19:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.326 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.CuZ 00:17:17.326 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.CuZ 00:17:17.326 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:17:17.326 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.326 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:17.326 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:17.326 01:19:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:17.583 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:17:17.583 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:17.583 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:17.583 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:17.583 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:17.583 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:17.583 01:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.583 01:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.583 01:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.583 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:17.584 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:17.841 00:17:17.841 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:17.841 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:17.841 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.098 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.098 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.098 01:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.098 01:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.098 01:19:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.098 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:18.098 { 00:17:18.098 "cntlid": 1, 00:17:18.098 "qid": 0, 00:17:18.098 "state": "enabled", 00:17:18.098 "listen_address": { 00:17:18.098 "trtype": "TCP", 00:17:18.098 "adrfam": "IPv4", 00:17:18.098 "traddr": "10.0.0.2", 00:17:18.098 "trsvcid": "4420" 00:17:18.098 }, 00:17:18.098 "peer_address": { 00:17:18.098 "trtype": "TCP", 00:17:18.098 "adrfam": "IPv4", 00:17:18.098 "traddr": "10.0.0.1", 00:17:18.098 "trsvcid": "48436" 00:17:18.098 }, 00:17:18.098 "auth": { 00:17:18.098 "state": "completed", 00:17:18.098 "digest": "sha256", 00:17:18.098 "dhgroup": "null" 00:17:18.098 } 00:17:18.098 } 00:17:18.098 ]' 00:17:18.098 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:18.098 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.098 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:18.098 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:18.098 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:18.098 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.098 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.098 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.355 01:19:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NmI0ZjhmNDYzOGVlODY0OWJlMzNkMWNhZjNjZmVjZjAxZjIyOTE2YTUzMzYwOGU1lpyeog==: 00:17:18.919 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:18.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.919 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:18.919 01:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.919 01:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.919 01:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.919 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:18.919 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:18.919 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:18.919 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:17:18.919 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:18.919 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:18.919 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:18.919 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:18.919 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:18.919 01:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.919 01:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.919 01:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.919 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:18.919 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:19.176 00:17:19.176 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:19.176 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:19.176 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.433 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.433 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.433 01:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.433 01:19:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.433 01:19:54 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.433 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:19.433 { 00:17:19.433 "cntlid": 3, 00:17:19.433 "qid": 0, 00:17:19.433 "state": "enabled", 00:17:19.434 "listen_address": { 00:17:19.434 "trtype": "TCP", 00:17:19.434 "adrfam": "IPv4", 00:17:19.434 "traddr": "10.0.0.2", 00:17:19.434 "trsvcid": "4420" 00:17:19.434 }, 00:17:19.434 "peer_address": { 00:17:19.434 "trtype": "TCP", 00:17:19.434 "adrfam": "IPv4", 00:17:19.434 "traddr": "10.0.0.1", 00:17:19.434 "trsvcid": "48462" 00:17:19.434 }, 00:17:19.434 "auth": { 00:17:19.434 "state": "completed", 00:17:19.434 "digest": "sha256", 00:17:19.434 "dhgroup": "null" 00:17:19.434 } 00:17:19.434 } 00:17:19.434 ]' 00:17:19.434 01:19:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:19.434 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.434 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:19.434 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:19.434 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:19.434 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.434 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.434 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.691 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NWYzZTEzZGRiMWEzZTE5OWQ2Y2M5YTk4NGYxNDA5MTBN+blz: 00:17:20.255 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.255 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:20.255 01:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.255 01:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.255 01:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.255 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:20.255 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:20.255 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:20.512 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:17:20.512 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:20.512 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:20.512 01:19:55 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@36 -- # dhgroup=null 00:17:20.512 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:20.512 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:17:20.512 01:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.512 01:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.512 01:19:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.512 01:19:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:20.512 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:20.770 00:17:20.770 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:20.770 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:20.770 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.770 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.770 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.770 01:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.770 01:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.770 01:19:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.770 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:20.770 { 00:17:20.770 "cntlid": 5, 00:17:20.770 "qid": 0, 00:17:20.770 "state": "enabled", 00:17:20.770 "listen_address": { 00:17:20.770 "trtype": "TCP", 00:17:20.770 "adrfam": "IPv4", 00:17:20.770 "traddr": "10.0.0.2", 00:17:20.770 "trsvcid": "4420" 00:17:20.770 }, 00:17:20.770 "peer_address": { 00:17:20.770 "trtype": "TCP", 00:17:20.770 "adrfam": "IPv4", 00:17:20.770 "traddr": "10.0.0.1", 00:17:20.770 "trsvcid": "48494" 00:17:20.770 }, 00:17:20.770 "auth": { 00:17:20.770 "state": "completed", 00:17:20.770 "digest": "sha256", 00:17:20.770 "dhgroup": "null" 00:17:20.770 } 00:17:20.770 } 00:17:20.770 ]' 00:17:20.770 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:21.028 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.028 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:21.028 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:21.028 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:21.028 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.028 01:19:56 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.029 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.304 01:19:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:OTYyNjVlYWYwYjNhNDYyNzU0NWEwZTFjNzUyMzRjZGUzYWYxNDRjMTljMTE3N2Q2dnAXGQ==: 00:17:21.868 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.868 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:21.868 01:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.868 01:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.868 01:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.868 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:21.868 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:21.868 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:21.868 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:17:21.868 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:21.868 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:21.868 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:21.868 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:21.868 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:21.868 01:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.868 01:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.868 01:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.868 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.868 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.125 00:17:22.125 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:22.125 01:19:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.125 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:22.381 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.381 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.381 01:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.381 01:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.381 01:19:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.381 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:22.381 { 00:17:22.381 "cntlid": 7, 00:17:22.381 "qid": 0, 00:17:22.381 "state": "enabled", 00:17:22.381 "listen_address": { 00:17:22.381 "trtype": "TCP", 00:17:22.381 "adrfam": "IPv4", 00:17:22.381 "traddr": "10.0.0.2", 00:17:22.381 "trsvcid": "4420" 00:17:22.381 }, 00:17:22.381 "peer_address": { 00:17:22.381 "trtype": "TCP", 00:17:22.381 "adrfam": "IPv4", 00:17:22.381 "traddr": "10.0.0.1", 00:17:22.381 "trsvcid": "48528" 00:17:22.381 }, 00:17:22.381 "auth": { 00:17:22.381 "state": "completed", 00:17:22.381 "digest": "sha256", 00:17:22.381 "dhgroup": "null" 00:17:22.381 } 00:17:22.381 } 00:17:22.381 ]' 00:17:22.382 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:22.382 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:22.382 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:22.382 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:22.382 01:19:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:22.382 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.382 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.382 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.638 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YWIwYjhlZDliNmUxMmYxYjQzZmQ5YTRmOTViNDgwNjE2ZDUyN2Q0MWE2ZTg5OWY1ODExMzY1YzUzMDFkNzMwNvZU+5s=: 00:17:23.203 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.203 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:23.203 01:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.203 01:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.203 01:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.203 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for 
dhgroup in "${dhgroups[@]}" 00:17:23.203 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:23.203 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:23.203 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:23.465 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:17:23.465 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:23.465 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:23.465 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:23.465 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:23.465 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:23.465 01:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.465 01:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.465 01:19:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.465 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:23.465 01:19:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:23.465 00:17:23.722 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:23.722 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:23.722 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.722 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.722 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.722 01:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.722 01:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.722 01:19:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.722 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:23.722 { 00:17:23.722 "cntlid": 9, 00:17:23.722 "qid": 0, 00:17:23.722 "state": "enabled", 00:17:23.722 "listen_address": { 00:17:23.722 "trtype": "TCP", 00:17:23.722 "adrfam": "IPv4", 00:17:23.722 "traddr": "10.0.0.2", 00:17:23.722 "trsvcid": "4420" 00:17:23.722 }, 00:17:23.722 "peer_address": { 00:17:23.722 "trtype": "TCP", 00:17:23.722 "adrfam": "IPv4", 00:17:23.722 "traddr": "10.0.0.1", 
00:17:23.722 "trsvcid": "47248" 00:17:23.722 }, 00:17:23.722 "auth": { 00:17:23.722 "state": "completed", 00:17:23.722 "digest": "sha256", 00:17:23.722 "dhgroup": "ffdhe2048" 00:17:23.722 } 00:17:23.722 } 00:17:23.722 ]' 00:17:23.722 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:23.722 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:23.722 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:23.979 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:23.979 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:23.979 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.979 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.979 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.979 01:19:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NmI0ZjhmNDYzOGVlODY0OWJlMzNkMWNhZjNjZmVjZjAxZjIyOTE2YTUzMzYwOGU1lpyeog==: 00:17:24.543 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.543 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:24.543 01:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.543 01:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.543 01:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.543 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:24.543 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.543 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.800 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:17:24.800 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:24.800 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:24.800 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:24.800 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:24.800 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:24.800 01:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.800 01:20:00 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:24.800 01:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.800 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:24.800 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:25.057 00:17:25.057 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:25.057 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:25.057 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.314 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.314 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.314 01:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.314 01:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.314 01:20:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.314 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:25.314 { 00:17:25.314 "cntlid": 11, 00:17:25.314 "qid": 0, 00:17:25.314 "state": "enabled", 00:17:25.314 "listen_address": { 00:17:25.314 "trtype": "TCP", 00:17:25.314 "adrfam": "IPv4", 00:17:25.314 "traddr": "10.0.0.2", 00:17:25.314 "trsvcid": "4420" 00:17:25.314 }, 00:17:25.314 "peer_address": { 00:17:25.314 "trtype": "TCP", 00:17:25.314 "adrfam": "IPv4", 00:17:25.314 "traddr": "10.0.0.1", 00:17:25.314 "trsvcid": "47280" 00:17:25.314 }, 00:17:25.314 "auth": { 00:17:25.314 "state": "completed", 00:17:25.314 "digest": "sha256", 00:17:25.314 "dhgroup": "ffdhe2048" 00:17:25.314 } 00:17:25.314 } 00:17:25.314 ]' 00:17:25.314 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:25.314 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:25.314 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:25.314 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:25.314 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:25.314 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.314 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.314 01:20:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.571 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NWYzZTEzZGRiMWEzZTE5OWQ2Y2M5YTk4NGYxNDA5MTBN+blz: 00:17:26.135 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.135 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:26.135 01:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.135 01:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.135 01:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.135 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:26.135 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:26.135 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:26.392 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:17:26.392 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:26.392 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:26.392 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:26.392 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:26.392 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:17:26.392 01:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.392 01:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.392 01:20:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.392 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:26.392 01:20:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:26.648 00:17:26.648 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:26.648 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:26.648 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.648 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.648 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:26.648 01:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.648 01:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.648 01:20:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.648 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:26.648 { 00:17:26.649 "cntlid": 13, 00:17:26.649 "qid": 0, 00:17:26.649 "state": "enabled", 00:17:26.649 "listen_address": { 00:17:26.649 "trtype": "TCP", 00:17:26.649 "adrfam": "IPv4", 00:17:26.649 "traddr": "10.0.0.2", 00:17:26.649 "trsvcid": "4420" 00:17:26.649 }, 00:17:26.649 "peer_address": { 00:17:26.649 "trtype": "TCP", 00:17:26.649 "adrfam": "IPv4", 00:17:26.649 "traddr": "10.0.0.1", 00:17:26.649 "trsvcid": "47304" 00:17:26.649 }, 00:17:26.649 "auth": { 00:17:26.649 "state": "completed", 00:17:26.649 "digest": "sha256", 00:17:26.649 "dhgroup": "ffdhe2048" 00:17:26.649 } 00:17:26.649 } 00:17:26.649 ]' 00:17:26.649 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:26.905 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.905 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:26.905 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:26.905 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:26.905 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.905 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.905 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.163 01:20:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:OTYyNjVlYWYwYjNhNDYyNzU0NWEwZTFjNzUyMzRjZGUzYWYxNDRjMTljMTE3N2Q2dnAXGQ==: 00:17:27.727 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.727 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:27.727 01:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.727 01:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.727 01:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.727 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:27.727 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:27.727 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:27.727 01:20:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:17:27.727 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:27.727 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:27.727 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:27.727 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:27.727 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:27.727 01:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.727 01:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.727 01:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.727 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:27.727 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:27.988 00:17:27.988 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:27.988 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:27.988 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.245 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.245 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.245 01:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.245 01:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.245 01:20:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.245 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:28.245 { 00:17:28.245 "cntlid": 15, 00:17:28.245 "qid": 0, 00:17:28.245 "state": "enabled", 00:17:28.245 "listen_address": { 00:17:28.245 "trtype": "TCP", 00:17:28.245 "adrfam": "IPv4", 00:17:28.245 "traddr": "10.0.0.2", 00:17:28.245 "trsvcid": "4420" 00:17:28.245 }, 00:17:28.245 "peer_address": { 00:17:28.245 "trtype": "TCP", 00:17:28.245 "adrfam": "IPv4", 00:17:28.245 "traddr": "10.0.0.1", 00:17:28.245 "trsvcid": "47340" 00:17:28.245 }, 00:17:28.245 "auth": { 00:17:28.245 "state": "completed", 00:17:28.245 "digest": "sha256", 00:17:28.245 "dhgroup": "ffdhe2048" 00:17:28.245 } 00:17:28.245 } 00:17:28.245 ]' 00:17:28.245 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:28.245 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.245 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:28.245 01:20:03 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:28.245 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:28.245 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.245 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.245 01:20:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.502 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YWIwYjhlZDliNmUxMmYxYjQzZmQ5YTRmOTViNDgwNjE2ZDUyN2Q0MWE2ZTg5OWY1ODExMzY1YzUzMDFkNzMwNvZU+5s=: 00:17:29.066 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.066 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:29.066 01:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.066 01:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.066 01:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.066 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.066 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:29.066 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:29.066 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:29.323 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:17:29.323 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:29.323 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:29.323 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:29.323 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:29.323 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:29.323 01:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.323 01:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.323 01:20:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.323 01:20:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:29.323 01:20:04 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:29.580 00:17:29.580 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:29.580 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:29.580 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.580 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.580 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.580 01:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.580 01:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.580 01:20:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.580 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:29.580 { 00:17:29.580 "cntlid": 17, 00:17:29.580 "qid": 0, 00:17:29.580 "state": "enabled", 00:17:29.580 "listen_address": { 00:17:29.580 "trtype": "TCP", 00:17:29.580 "adrfam": "IPv4", 00:17:29.580 "traddr": "10.0.0.2", 00:17:29.580 "trsvcid": "4420" 00:17:29.580 }, 00:17:29.580 "peer_address": { 00:17:29.580 "trtype": "TCP", 00:17:29.580 "adrfam": "IPv4", 00:17:29.580 "traddr": "10.0.0.1", 00:17:29.580 "trsvcid": "47360" 00:17:29.580 }, 00:17:29.580 "auth": { 00:17:29.580 "state": "completed", 00:17:29.580 "digest": "sha256", 00:17:29.580 "dhgroup": "ffdhe3072" 00:17:29.580 } 00:17:29.580 } 00:17:29.580 ]' 00:17:29.580 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:29.837 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:29.837 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:29.837 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:29.837 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:29.837 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.837 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.837 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.094 01:20:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NmI0ZjhmNDYzOGVlODY0OWJlMzNkMWNhZjNjZmVjZjAxZjIyOTE2YTUzMzYwOGU1lpyeog==: 00:17:30.657 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.657 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:30.657 01:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.657 01:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.657 01:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.658 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:30.658 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:30.658 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:30.658 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:17:30.658 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:30.658 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:30.658 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:30.658 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:30.658 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:30.658 01:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.658 01:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.658 01:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.658 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:30.658 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:30.915 00:17:30.915 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:30.915 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:30.915 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.172 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.172 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.172 01:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.172 01:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.172 01:20:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.172 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:31.172 { 
00:17:31.172 "cntlid": 19, 00:17:31.172 "qid": 0, 00:17:31.172 "state": "enabled", 00:17:31.172 "listen_address": { 00:17:31.172 "trtype": "TCP", 00:17:31.172 "adrfam": "IPv4", 00:17:31.172 "traddr": "10.0.0.2", 00:17:31.172 "trsvcid": "4420" 00:17:31.172 }, 00:17:31.172 "peer_address": { 00:17:31.172 "trtype": "TCP", 00:17:31.172 "adrfam": "IPv4", 00:17:31.172 "traddr": "10.0.0.1", 00:17:31.172 "trsvcid": "47392" 00:17:31.172 }, 00:17:31.172 "auth": { 00:17:31.172 "state": "completed", 00:17:31.172 "digest": "sha256", 00:17:31.172 "dhgroup": "ffdhe3072" 00:17:31.172 } 00:17:31.172 } 00:17:31.172 ]' 00:17:31.172 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:31.172 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:31.172 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:31.172 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:31.172 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:31.172 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.172 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.172 01:20:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.429 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NWYzZTEzZGRiMWEzZTE5OWQ2Y2M5YTk4NGYxNDA5MTBN+blz: 00:17:31.992 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.992 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:31.992 01:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.992 01:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.992 01:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.992 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:31.992 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:31.992 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:32.249 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:17:32.249 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:32.249 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:32.249 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:32.249 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:32.249 
01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:17:32.249 01:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.249 01:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.249 01:20:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.249 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:32.249 01:20:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:32.506 00:17:32.506 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:32.506 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.506 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:32.506 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.506 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.506 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.506 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.763 01:20:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.763 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:32.763 { 00:17:32.763 "cntlid": 21, 00:17:32.763 "qid": 0, 00:17:32.763 "state": "enabled", 00:17:32.763 "listen_address": { 00:17:32.763 "trtype": "TCP", 00:17:32.763 "adrfam": "IPv4", 00:17:32.763 "traddr": "10.0.0.2", 00:17:32.763 "trsvcid": "4420" 00:17:32.763 }, 00:17:32.763 "peer_address": { 00:17:32.763 "trtype": "TCP", 00:17:32.763 "adrfam": "IPv4", 00:17:32.763 "traddr": "10.0.0.1", 00:17:32.763 "trsvcid": "43798" 00:17:32.763 }, 00:17:32.763 "auth": { 00:17:32.763 "state": "completed", 00:17:32.763 "digest": "sha256", 00:17:32.763 "dhgroup": "ffdhe3072" 00:17:32.763 } 00:17:32.763 } 00:17:32.763 ]' 00:17:32.763 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:32.763 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:32.763 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:32.763 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:32.763 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:32.763 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.763 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.763 01:20:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.020 01:20:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:OTYyNjVlYWYwYjNhNDYyNzU0NWEwZTFjNzUyMzRjZGUzYWYxNDRjMTljMTE3N2Q2dnAXGQ==: 00:17:33.584 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.584 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:33.584 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.584 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.584 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.584 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:33.584 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:33.584 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:33.584 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:17:33.584 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:33.584 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:33.584 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:33.584 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:33.584 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:33.584 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.584 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.584 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.584 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.584 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.840 00:17:33.840 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:33.840 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:33.840 01:20:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.097 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.097 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.097 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.097 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.097 01:20:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.097 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:34.097 { 00:17:34.097 "cntlid": 23, 00:17:34.097 "qid": 0, 00:17:34.097 "state": "enabled", 00:17:34.097 "listen_address": { 00:17:34.097 "trtype": "TCP", 00:17:34.097 "adrfam": "IPv4", 00:17:34.097 "traddr": "10.0.0.2", 00:17:34.097 "trsvcid": "4420" 00:17:34.097 }, 00:17:34.097 "peer_address": { 00:17:34.097 "trtype": "TCP", 00:17:34.097 "adrfam": "IPv4", 00:17:34.097 "traddr": "10.0.0.1", 00:17:34.097 "trsvcid": "43824" 00:17:34.097 }, 00:17:34.097 "auth": { 00:17:34.097 "state": "completed", 00:17:34.097 "digest": "sha256", 00:17:34.097 "dhgroup": "ffdhe3072" 00:17:34.097 } 00:17:34.097 } 00:17:34.097 ]' 00:17:34.097 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:34.097 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:34.097 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:34.097 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:34.097 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:34.353 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.353 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.353 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.353 01:20:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YWIwYjhlZDliNmUxMmYxYjQzZmQ5YTRmOTViNDgwNjE2ZDUyN2Q0MWE2ZTg5OWY1ODExMzY1YzUzMDFkNzMwNvZU+5s=: 00:17:34.923 01:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.923 01:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:34.923 01:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.923 01:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.923 01:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.923 01:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:34.924 01:20:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:34.924 01:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:34.924 01:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:35.183 01:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:17:35.183 01:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:35.183 01:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:35.183 01:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:35.183 01:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:35.183 01:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:35.183 01:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.183 01:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.183 01:20:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.183 01:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:35.183 01:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:35.439 00:17:35.439 01:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:35.439 01:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:35.439 01:20:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.696 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.696 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.696 01:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.696 01:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.696 01:20:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.696 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:35.696 { 00:17:35.696 "cntlid": 25, 00:17:35.696 "qid": 0, 00:17:35.696 "state": "enabled", 00:17:35.696 "listen_address": { 00:17:35.696 "trtype": "TCP", 00:17:35.696 "adrfam": "IPv4", 00:17:35.696 "traddr": "10.0.0.2", 00:17:35.696 "trsvcid": "4420" 00:17:35.696 }, 00:17:35.696 "peer_address": { 00:17:35.696 "trtype": "TCP", 00:17:35.696 "adrfam": "IPv4", 00:17:35.696 "traddr": "10.0.0.1", 00:17:35.696 "trsvcid": "43846" 00:17:35.696 }, 
00:17:35.696 "auth": { 00:17:35.696 "state": "completed", 00:17:35.696 "digest": "sha256", 00:17:35.696 "dhgroup": "ffdhe4096" 00:17:35.696 } 00:17:35.696 } 00:17:35.696 ]' 00:17:35.696 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:35.696 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.696 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:35.696 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:35.696 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:35.696 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.696 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.696 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.953 01:20:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NmI0ZjhmNDYzOGVlODY0OWJlMzNkMWNhZjNjZmVjZjAxZjIyOTE2YTUzMzYwOGU1lpyeog==: 00:17:36.548 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.548 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.548 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:36.548 01:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.548 01:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.548 01:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.548 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:36.548 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:36.548 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:36.548 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:17:36.548 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:36.548 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:36.548 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:36.548 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:36.548 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:36.548 01:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.548 01:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:36.548 01:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.548 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:36.548 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:36.806 00:17:36.806 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:36.806 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:36.806 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.063 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.063 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.063 01:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.063 01:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.063 01:20:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.063 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:37.063 { 00:17:37.063 "cntlid": 27, 00:17:37.063 "qid": 0, 00:17:37.063 "state": "enabled", 00:17:37.063 "listen_address": { 00:17:37.063 "trtype": "TCP", 00:17:37.063 "adrfam": "IPv4", 00:17:37.063 "traddr": "10.0.0.2", 00:17:37.063 "trsvcid": "4420" 00:17:37.063 }, 00:17:37.063 "peer_address": { 00:17:37.063 "trtype": "TCP", 00:17:37.063 "adrfam": "IPv4", 00:17:37.063 "traddr": "10.0.0.1", 00:17:37.063 "trsvcid": "43876" 00:17:37.063 }, 00:17:37.063 "auth": { 00:17:37.063 "state": "completed", 00:17:37.063 "digest": "sha256", 00:17:37.063 "dhgroup": "ffdhe4096" 00:17:37.063 } 00:17:37.063 } 00:17:37.063 ]' 00:17:37.063 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:37.063 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:37.063 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:37.063 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:37.063 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:37.320 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.320 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.320 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.320 01:20:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e 
--dhchap-secret DHHC-1:01:NWYzZTEzZGRiMWEzZTE5OWQ2Y2M5YTk4NGYxNDA5MTBN+blz: 00:17:37.883 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.883 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:37.883 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.883 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.883 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.883 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:37.883 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:37.883 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:38.140 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:17:38.140 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:38.140 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:38.140 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:38.140 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:38.140 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:17:38.140 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.140 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.140 01:20:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.140 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:38.140 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:38.398 00:17:38.398 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:38.398 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:38.398 01:20:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.655 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.655 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.655 01:20:14 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.655 01:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.655 01:20:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.655 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:38.655 { 00:17:38.655 "cntlid": 29, 00:17:38.655 "qid": 0, 00:17:38.655 "state": "enabled", 00:17:38.655 "listen_address": { 00:17:38.655 "trtype": "TCP", 00:17:38.655 "adrfam": "IPv4", 00:17:38.655 "traddr": "10.0.0.2", 00:17:38.655 "trsvcid": "4420" 00:17:38.655 }, 00:17:38.655 "peer_address": { 00:17:38.655 "trtype": "TCP", 00:17:38.655 "adrfam": "IPv4", 00:17:38.655 "traddr": "10.0.0.1", 00:17:38.655 "trsvcid": "43916" 00:17:38.655 }, 00:17:38.655 "auth": { 00:17:38.655 "state": "completed", 00:17:38.655 "digest": "sha256", 00:17:38.655 "dhgroup": "ffdhe4096" 00:17:38.655 } 00:17:38.655 } 00:17:38.655 ]' 00:17:38.655 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:38.655 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:38.655 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:38.655 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:38.656 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:38.656 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.656 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.656 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.912 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:OTYyNjVlYWYwYjNhNDYyNzU0NWEwZTFjNzUyMzRjZGUzYWYxNDRjMTljMTE3N2Q2dnAXGQ==: 00:17:39.476 01:20:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.476 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:39.476 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.476 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.476 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.476 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:39.476 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.477 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:39.733 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 
ffdhe4096 3 00:17:39.733 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:39.733 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:39.733 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:39.733 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:39.733 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:39.733 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.733 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.733 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.733 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.733 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.990 00:17:39.990 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:39.990 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.990 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:39.990 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.990 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.990 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.990 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.990 01:20:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.990 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:39.990 { 00:17:39.990 "cntlid": 31, 00:17:39.990 "qid": 0, 00:17:39.990 "state": "enabled", 00:17:39.990 "listen_address": { 00:17:39.990 "trtype": "TCP", 00:17:39.990 "adrfam": "IPv4", 00:17:39.990 "traddr": "10.0.0.2", 00:17:39.990 "trsvcid": "4420" 00:17:39.990 }, 00:17:39.990 "peer_address": { 00:17:39.990 "trtype": "TCP", 00:17:39.990 "adrfam": "IPv4", 00:17:39.990 "traddr": "10.0.0.1", 00:17:39.990 "trsvcid": "43938" 00:17:39.990 }, 00:17:39.990 "auth": { 00:17:39.990 "state": "completed", 00:17:39.990 "digest": "sha256", 00:17:39.990 "dhgroup": "ffdhe4096" 00:17:39.990 } 00:17:39.990 } 00:17:39.990 ]' 00:17:39.990 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:40.247 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.247 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:40.247 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:40.247 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:40.247 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.247 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.247 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.504 01:20:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YWIwYjhlZDliNmUxMmYxYjQzZmQ5YTRmOTViNDgwNjE2ZDUyN2Q0MWE2ZTg5OWY1ODExMzY1YzUzMDFkNzMwNvZU+5s=: 00:17:41.069 01:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.069 01:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:41.069 01:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.069 01:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.069 01:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.069 01:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.069 01:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:41.069 01:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:41.069 01:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:41.069 01:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:17:41.069 01:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:41.069 01:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:41.069 01:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:41.069 01:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:41.069 01:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:41.069 01:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.069 01:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.069 01:20:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.069 01:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:41.069 01:20:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:41.326 00:17:41.326 01:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:41.326 01:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.326 01:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:41.583 01:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.583 01:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.583 01:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.583 01:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.583 01:20:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.583 01:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:41.583 { 00:17:41.583 "cntlid": 33, 00:17:41.583 "qid": 0, 00:17:41.583 "state": "enabled", 00:17:41.583 "listen_address": { 00:17:41.583 "trtype": "TCP", 00:17:41.583 "adrfam": "IPv4", 00:17:41.583 "traddr": "10.0.0.2", 00:17:41.583 "trsvcid": "4420" 00:17:41.583 }, 00:17:41.583 "peer_address": { 00:17:41.583 "trtype": "TCP", 00:17:41.583 "adrfam": "IPv4", 00:17:41.583 "traddr": "10.0.0.1", 00:17:41.583 "trsvcid": "43962" 00:17:41.583 }, 00:17:41.583 "auth": { 00:17:41.583 "state": "completed", 00:17:41.583 "digest": "sha256", 00:17:41.583 "dhgroup": "ffdhe6144" 00:17:41.583 } 00:17:41.583 } 00:17:41.583 ]' 00:17:41.583 01:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:41.583 01:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:41.583 01:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:41.840 01:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:41.840 01:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:41.840 01:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.840 01:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.840 01:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.840 01:20:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NmI0ZjhmNDYzOGVlODY0OWJlMzNkMWNhZjNjZmVjZjAxZjIyOTE2YTUzMzYwOGU1lpyeog==: 00:17:42.404 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.404 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:42.404 01:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.404 01:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.404 01:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.404 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:42.404 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:42.404 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:42.662 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:17:42.662 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:42.662 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:42.662 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:42.662 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:42.662 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:42.662 01:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.662 01:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.662 01:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.662 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:42.662 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:42.920 00:17:42.920 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:42.920 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:42.920 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.178 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.178 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.178 01:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.178 01:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.178 01:20:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.178 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:43.178 { 00:17:43.178 "cntlid": 35, 00:17:43.178 "qid": 0, 
00:17:43.178 "state": "enabled", 00:17:43.178 "listen_address": { 00:17:43.178 "trtype": "TCP", 00:17:43.178 "adrfam": "IPv4", 00:17:43.178 "traddr": "10.0.0.2", 00:17:43.178 "trsvcid": "4420" 00:17:43.178 }, 00:17:43.178 "peer_address": { 00:17:43.178 "trtype": "TCP", 00:17:43.178 "adrfam": "IPv4", 00:17:43.178 "traddr": "10.0.0.1", 00:17:43.178 "trsvcid": "39442" 00:17:43.178 }, 00:17:43.178 "auth": { 00:17:43.178 "state": "completed", 00:17:43.178 "digest": "sha256", 00:17:43.178 "dhgroup": "ffdhe6144" 00:17:43.178 } 00:17:43.178 } 00:17:43.178 ]' 00:17:43.178 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:43.178 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.178 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:43.178 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:43.178 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:43.435 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.435 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.435 01:20:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.435 01:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NWYzZTEzZGRiMWEzZTE5OWQ2Y2M5YTk4NGYxNDA5MTBN+blz: 00:17:43.999 01:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.999 01:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:43.999 01:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.999 01:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.999 01:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.999 01:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:43.999 01:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:43.999 01:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:44.256 01:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:17:44.256 01:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:44.256 01:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:44.256 01:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:44.256 01:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:44.256 01:20:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:17:44.256 01:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.256 01:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.256 01:20:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.256 01:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:44.256 01:20:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:44.513 00:17:44.513 01:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:44.513 01:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:44.513 01:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.770 01:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.770 01:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.770 01:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.770 01:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.770 01:20:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.770 01:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:44.770 { 00:17:44.770 "cntlid": 37, 00:17:44.770 "qid": 0, 00:17:44.770 "state": "enabled", 00:17:44.770 "listen_address": { 00:17:44.770 "trtype": "TCP", 00:17:44.770 "adrfam": "IPv4", 00:17:44.770 "traddr": "10.0.0.2", 00:17:44.770 "trsvcid": "4420" 00:17:44.770 }, 00:17:44.770 "peer_address": { 00:17:44.770 "trtype": "TCP", 00:17:44.770 "adrfam": "IPv4", 00:17:44.770 "traddr": "10.0.0.1", 00:17:44.770 "trsvcid": "39468" 00:17:44.770 }, 00:17:44.770 "auth": { 00:17:44.770 "state": "completed", 00:17:44.770 "digest": "sha256", 00:17:44.770 "dhgroup": "ffdhe6144" 00:17:44.770 } 00:17:44.770 } 00:17:44.770 ]' 00:17:44.770 01:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:44.770 01:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.770 01:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:44.770 01:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:44.770 01:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:44.770 01:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.770 01:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.770 01:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.027 01:20:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:OTYyNjVlYWYwYjNhNDYyNzU0NWEwZTFjNzUyMzRjZGUzYWYxNDRjMTljMTE3N2Q2dnAXGQ==: 00:17:45.590 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.590 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:45.590 01:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.590 01:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.590 01:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.590 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:45.590 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:45.590 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:45.848 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:17:45.848 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:45.848 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:45.848 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:45.848 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:45.848 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:45.848 01:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.848 01:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.848 01:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.848 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.848 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:46.104 00:17:46.104 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:46.104 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:46.104 01:20:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.360 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.360 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.360 01:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.360 01:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.360 01:20:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.360 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:46.360 { 00:17:46.360 "cntlid": 39, 00:17:46.360 "qid": 0, 00:17:46.360 "state": "enabled", 00:17:46.360 "listen_address": { 00:17:46.360 "trtype": "TCP", 00:17:46.360 "adrfam": "IPv4", 00:17:46.360 "traddr": "10.0.0.2", 00:17:46.360 "trsvcid": "4420" 00:17:46.360 }, 00:17:46.360 "peer_address": { 00:17:46.360 "trtype": "TCP", 00:17:46.360 "adrfam": "IPv4", 00:17:46.360 "traddr": "10.0.0.1", 00:17:46.360 "trsvcid": "39500" 00:17:46.360 }, 00:17:46.360 "auth": { 00:17:46.360 "state": "completed", 00:17:46.360 "digest": "sha256", 00:17:46.360 "dhgroup": "ffdhe6144" 00:17:46.360 } 00:17:46.360 } 00:17:46.360 ]' 00:17:46.360 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:46.360 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.360 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:46.360 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:46.360 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:46.360 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.360 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.360 01:20:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.616 01:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YWIwYjhlZDliNmUxMmYxYjQzZmQ5YTRmOTViNDgwNjE2ZDUyN2Q0MWE2ZTg5OWY1ODExMzY1YzUzMDFkNzMwNvZU+5s=: 00:17:47.176 01:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.177 01:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:47.177 01:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.177 01:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.177 01:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.177 01:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.177 01:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # 
for keyid in "${!keys[@]}" 00:17:47.177 01:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:47.177 01:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:47.177 01:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:17:47.177 01:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:47.177 01:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:47.177 01:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:47.177 01:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:47.177 01:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:47.177 01:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.177 01:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.177 01:20:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.177 01:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:47.177 01:20:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:47.740 00:17:47.740 01:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:47.740 01:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:47.740 01:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.996 01:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.996 01:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.996 01:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.996 01:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.996 01:20:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.996 01:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:47.996 { 00:17:47.996 "cntlid": 41, 00:17:47.996 "qid": 0, 00:17:47.996 "state": "enabled", 00:17:47.996 "listen_address": { 00:17:47.996 "trtype": "TCP", 00:17:47.996 "adrfam": "IPv4", 00:17:47.996 "traddr": "10.0.0.2", 00:17:47.996 "trsvcid": "4420" 00:17:47.996 }, 00:17:47.996 "peer_address": { 00:17:47.996 "trtype": "TCP", 00:17:47.996 "adrfam": "IPv4", 00:17:47.996 "traddr": "10.0.0.1", 00:17:47.996 "trsvcid": "39538" 00:17:47.996 }, 00:17:47.996 "auth": { 00:17:47.996 "state": 
"completed", 00:17:47.996 "digest": "sha256", 00:17:47.996 "dhgroup": "ffdhe8192" 00:17:47.996 } 00:17:47.996 } 00:17:47.996 ]' 00:17:47.996 01:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:47.996 01:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:47.996 01:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:47.996 01:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:47.996 01:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:47.996 01:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.996 01:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.996 01:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.253 01:20:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NmI0ZjhmNDYzOGVlODY0OWJlMzNkMWNhZjNjZmVjZjAxZjIyOTE2YTUzMzYwOGU1lpyeog==: 00:17:48.817 01:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.817 01:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:48.817 01:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.817 01:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.817 01:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.817 01:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:48.817 01:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:48.817 01:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:49.075 01:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:17:49.075 01:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:49.075 01:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:49.075 01:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:49.075 01:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:49.075 01:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:49.075 01:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.075 01:20:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.075 01:20:24 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.075 01:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:49.075 01:20:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:49.366 00:17:49.366 01:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:49.366 01:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:49.366 01:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.623 01:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.623 01:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.623 01:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.623 01:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.623 01:20:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.623 01:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:49.623 { 00:17:49.623 "cntlid": 43, 00:17:49.623 "qid": 0, 00:17:49.623 "state": "enabled", 00:17:49.623 "listen_address": { 00:17:49.623 "trtype": "TCP", 00:17:49.623 "adrfam": "IPv4", 00:17:49.623 "traddr": "10.0.0.2", 00:17:49.623 "trsvcid": "4420" 00:17:49.623 }, 00:17:49.623 "peer_address": { 00:17:49.623 "trtype": "TCP", 00:17:49.623 "adrfam": "IPv4", 00:17:49.623 "traddr": "10.0.0.1", 00:17:49.623 "trsvcid": "39570" 00:17:49.623 }, 00:17:49.623 "auth": { 00:17:49.623 "state": "completed", 00:17:49.623 "digest": "sha256", 00:17:49.623 "dhgroup": "ffdhe8192" 00:17:49.623 } 00:17:49.623 } 00:17:49.623 ]' 00:17:49.623 01:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:49.623 01:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.623 01:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:49.623 01:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:49.623 01:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:49.879 01:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.879 01:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.879 01:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.879 01:20:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret 
DHHC-1:01:NWYzZTEzZGRiMWEzZTE5OWQ2Y2M5YTk4NGYxNDA5MTBN+blz: 00:17:50.442 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.442 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:50.442 01:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.442 01:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.442 01:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.442 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:50.442 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:50.442 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:50.699 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:17:50.699 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:50.699 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:50.699 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:50.699 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:50.699 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:17:50.699 01:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.699 01:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.699 01:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.699 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:50.699 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:51.262 00:17:51.262 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:51.262 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:51.262 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.262 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.262 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.262 01:20:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.262 01:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.262 01:20:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.262 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:51.262 { 00:17:51.262 "cntlid": 45, 00:17:51.262 "qid": 0, 00:17:51.262 "state": "enabled", 00:17:51.262 "listen_address": { 00:17:51.262 "trtype": "TCP", 00:17:51.262 "adrfam": "IPv4", 00:17:51.262 "traddr": "10.0.0.2", 00:17:51.262 "trsvcid": "4420" 00:17:51.262 }, 00:17:51.262 "peer_address": { 00:17:51.262 "trtype": "TCP", 00:17:51.262 "adrfam": "IPv4", 00:17:51.262 "traddr": "10.0.0.1", 00:17:51.262 "trsvcid": "39600" 00:17:51.262 }, 00:17:51.262 "auth": { 00:17:51.262 "state": "completed", 00:17:51.262 "digest": "sha256", 00:17:51.262 "dhgroup": "ffdhe8192" 00:17:51.262 } 00:17:51.262 } 00:17:51.262 ]' 00:17:51.262 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:51.519 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.519 01:20:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:51.519 01:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:51.519 01:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:51.519 01:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.519 01:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.519 01:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.776 01:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:OTYyNjVlYWYwYjNhNDYyNzU0NWEwZTFjNzUyMzRjZGUzYWYxNDRjMTljMTE3N2Q2dnAXGQ==: 00:17:52.340 01:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.340 01:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:52.340 01:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.340 01:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.340 01:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.340 01:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:52.340 01:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:52.340 01:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:52.340 01:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:17:52.340 
01:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:52.340 01:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:52.340 01:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:52.340 01:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:52.340 01:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:52.340 01:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.340 01:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.340 01:20:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.340 01:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.340 01:20:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.904 00:17:52.904 01:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:52.904 01:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:52.904 01:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.161 01:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.161 01:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.161 01:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.161 01:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.161 01:20:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.161 01:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:53.161 { 00:17:53.161 "cntlid": 47, 00:17:53.161 "qid": 0, 00:17:53.161 "state": "enabled", 00:17:53.161 "listen_address": { 00:17:53.161 "trtype": "TCP", 00:17:53.161 "adrfam": "IPv4", 00:17:53.161 "traddr": "10.0.0.2", 00:17:53.161 "trsvcid": "4420" 00:17:53.161 }, 00:17:53.161 "peer_address": { 00:17:53.161 "trtype": "TCP", 00:17:53.161 "adrfam": "IPv4", 00:17:53.161 "traddr": "10.0.0.1", 00:17:53.161 "trsvcid": "33608" 00:17:53.161 }, 00:17:53.161 "auth": { 00:17:53.161 "state": "completed", 00:17:53.161 "digest": "sha256", 00:17:53.161 "dhgroup": "ffdhe8192" 00:17:53.162 } 00:17:53.162 } 00:17:53.162 ]' 00:17:53.162 01:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:53.162 01:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.162 01:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:53.162 01:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:53.162 
01:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:53.162 01:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.162 01:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.162 01:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.418 01:20:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YWIwYjhlZDliNmUxMmYxYjQzZmQ5YTRmOTViNDgwNjE2ZDUyN2Q0MWE2ZTg5OWY1ODExMzY1YzUzMDFkNzMwNvZU+5s=: 00:17:53.982 01:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.982 01:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:53.982 01:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.982 01:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.982 01:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.982 01:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:17:53.982 01:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:53.982 01:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:53.982 01:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:53.982 01:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:54.239 01:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:17:54.239 01:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:54.239 01:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:54.239 01:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:54.239 01:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:54.239 01:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:17:54.239 01:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.239 01:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.239 01:20:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.239 01:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:54.239 01:20:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:54.239 00:17:54.239 01:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:54.239 01:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:54.239 01:20:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.496 01:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.496 01:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.496 01:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.496 01:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.496 01:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.496 01:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:54.496 { 00:17:54.496 "cntlid": 49, 00:17:54.496 "qid": 0, 00:17:54.496 "state": "enabled", 00:17:54.496 "listen_address": { 00:17:54.496 "trtype": "TCP", 00:17:54.496 "adrfam": "IPv4", 00:17:54.496 "traddr": "10.0.0.2", 00:17:54.496 "trsvcid": "4420" 00:17:54.496 }, 00:17:54.496 "peer_address": { 00:17:54.496 "trtype": "TCP", 00:17:54.496 "adrfam": "IPv4", 00:17:54.496 "traddr": "10.0.0.1", 00:17:54.496 "trsvcid": "33650" 00:17:54.496 }, 00:17:54.496 "auth": { 00:17:54.496 "state": "completed", 00:17:54.496 "digest": "sha384", 00:17:54.496 "dhgroup": "null" 00:17:54.496 } 00:17:54.496 } 00:17:54.496 ]' 00:17:54.496 01:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:54.496 01:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.496 01:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:54.753 01:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:54.753 01:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:54.753 01:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.753 01:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.753 01:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.753 01:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NmI0ZjhmNDYzOGVlODY0OWJlMzNkMWNhZjNjZmVjZjAxZjIyOTE2YTUzMzYwOGU1lpyeog==: 00:17:55.317 01:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.317 01:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:55.317 01:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.317 01:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.317 01:20:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.317 01:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:55.317 01:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:55.317 01:20:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:55.573 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:17:55.573 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:55.573 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:55.573 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:55.573 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:55.573 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:17:55.573 01:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.573 01:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.573 01:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.573 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:55.573 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:17:55.830 00:17:55.830 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:55.830 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.830 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:56.087 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.087 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.087 01:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.087 01:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.087 01:20:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.087 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:56.087 { 00:17:56.087 "cntlid": 51, 00:17:56.087 "qid": 
0, 00:17:56.087 "state": "enabled", 00:17:56.087 "listen_address": { 00:17:56.087 "trtype": "TCP", 00:17:56.087 "adrfam": "IPv4", 00:17:56.087 "traddr": "10.0.0.2", 00:17:56.087 "trsvcid": "4420" 00:17:56.087 }, 00:17:56.087 "peer_address": { 00:17:56.087 "trtype": "TCP", 00:17:56.087 "adrfam": "IPv4", 00:17:56.087 "traddr": "10.0.0.1", 00:17:56.087 "trsvcid": "33694" 00:17:56.087 }, 00:17:56.087 "auth": { 00:17:56.087 "state": "completed", 00:17:56.087 "digest": "sha384", 00:17:56.087 "dhgroup": "null" 00:17:56.087 } 00:17:56.087 } 00:17:56.087 ]' 00:17:56.087 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:56.087 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.087 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:56.087 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:56.087 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:56.087 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.087 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.087 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.344 01:20:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NWYzZTEzZGRiMWEzZTE5OWQ2Y2M5YTk4NGYxNDA5MTBN+blz: 00:17:56.908 01:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.908 01:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:56.908 01:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.908 01:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.908 01:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.908 01:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:56.908 01:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:56.908 01:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:57.164 01:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:17:57.164 01:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:57.164 01:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:57.164 01:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:57.164 01:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:57.164 01:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:17:57.164 01:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.164 01:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.164 01:20:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.164 01:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:57.164 01:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:57.422 00:17:57.422 01:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:57.422 01:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:57.422 01:20:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.422 01:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.422 01:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.422 01:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.422 01:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.422 01:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.422 01:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:57.422 { 00:17:57.422 "cntlid": 53, 00:17:57.422 "qid": 0, 00:17:57.422 "state": "enabled", 00:17:57.422 "listen_address": { 00:17:57.422 "trtype": "TCP", 00:17:57.422 "adrfam": "IPv4", 00:17:57.422 "traddr": "10.0.0.2", 00:17:57.422 "trsvcid": "4420" 00:17:57.422 }, 00:17:57.422 "peer_address": { 00:17:57.422 "trtype": "TCP", 00:17:57.422 "adrfam": "IPv4", 00:17:57.422 "traddr": "10.0.0.1", 00:17:57.422 "trsvcid": "33726" 00:17:57.422 }, 00:17:57.422 "auth": { 00:17:57.422 "state": "completed", 00:17:57.422 "digest": "sha384", 00:17:57.422 "dhgroup": "null" 00:17:57.422 } 00:17:57.422 } 00:17:57.422 ]' 00:17:57.422 01:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:57.422 01:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:57.422 01:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:57.679 01:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:57.679 01:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:57.679 01:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.679 01:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.679 01:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.936 01:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:OTYyNjVlYWYwYjNhNDYyNzU0NWEwZTFjNzUyMzRjZGUzYWYxNDRjMTljMTE3N2Q2dnAXGQ==: 00:17:58.499 01:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.499 01:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:58.499 01:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.499 01:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.499 01:20:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.499 01:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:58.499 01:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:58.499 01:20:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:58.499 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:17:58.499 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:17:58.499 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:58.499 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:58.499 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:58.499 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:58.499 01:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.499 01:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.499 01:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.499 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.500 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:58.757 00:17:58.757 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:17:58.757 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:17:58.757 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.015 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.015 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.015 01:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.015 01:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.015 01:20:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.015 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:17:59.015 { 00:17:59.015 "cntlid": 55, 00:17:59.015 "qid": 0, 00:17:59.015 "state": "enabled", 00:17:59.015 "listen_address": { 00:17:59.015 "trtype": "TCP", 00:17:59.015 "adrfam": "IPv4", 00:17:59.015 "traddr": "10.0.0.2", 00:17:59.015 "trsvcid": "4420" 00:17:59.015 }, 00:17:59.015 "peer_address": { 00:17:59.015 "trtype": "TCP", 00:17:59.015 "adrfam": "IPv4", 00:17:59.015 "traddr": "10.0.0.1", 00:17:59.015 "trsvcid": "33754" 00:17:59.015 }, 00:17:59.015 "auth": { 00:17:59.015 "state": "completed", 00:17:59.015 "digest": "sha384", 00:17:59.015 "dhgroup": "null" 00:17:59.015 } 00:17:59.015 } 00:17:59.015 ]' 00:17:59.015 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:17:59.015 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.015 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:17:59.015 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:17:59.015 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:17:59.015 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.015 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.015 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.272 01:20:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YWIwYjhlZDliNmUxMmYxYjQzZmQ5YTRmOTViNDgwNjE2ZDUyN2Q0MWE2ZTg5OWY1ODExMzY1YzUzMDFkNzMwNvZU+5s=: 00:17:59.836 01:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.836 01:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:59.836 01:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.836 01:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.836 01:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.836 01:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.836 01:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:17:59.837 01:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc 
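Two SPDK instances are being driven in this trace: rpc_cmd talks to the nvmf target, while hostrpc forwards the same rpc.py requests to a second, host-side bdev_nvme instance listening on /var/tmp/host.sock, which is visible in every expanded command; the trace continues below with the wrapped bdev_nvme_set_options call for ffdhe2048. A plausible sketch of such a wrapper, assuming only the rpc.py path and socket shown in this log:

    # Hypothetical helper mirroring what target/auth.sh's hostrpc appears to do:
    # forward any RPC to the host-side SPDK application over its Unix socket.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    hostsock=/var/tmp/host.sock

    hostrpc() {
        "$rootdir/scripts/rpc.py" -s "$hostsock" "$@"
    }

    # Example: restrict the host-side initiator to sha384 with ffdhe2048 before connecting.
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048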
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:59.837 01:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:00.094 01:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:18:00.094 01:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:00.094 01:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:00.094 01:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:00.094 01:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:00.094 01:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:00.094 01:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.094 01:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.094 01:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.094 01:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:00.094 01:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:00.351 00:18:00.351 01:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:00.351 01:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:00.351 01:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.351 01:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.351 01:20:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.351 01:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.351 01:20:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.351 01:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.351 01:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:00.351 { 00:18:00.351 "cntlid": 57, 00:18:00.351 "qid": 0, 00:18:00.351 "state": "enabled", 00:18:00.351 "listen_address": { 00:18:00.351 "trtype": "TCP", 00:18:00.351 "adrfam": "IPv4", 00:18:00.351 "traddr": "10.0.0.2", 00:18:00.351 "trsvcid": "4420" 00:18:00.351 }, 00:18:00.351 "peer_address": { 00:18:00.351 "trtype": "TCP", 00:18:00.351 "adrfam": "IPv4", 00:18:00.351 "traddr": "10.0.0.1", 00:18:00.351 "trsvcid": "33774" 00:18:00.351 }, 00:18:00.351 "auth": { 00:18:00.351 "state": "completed", 00:18:00.351 "digest": "sha384", 00:18:00.351 "dhgroup": "ffdhe2048" 00:18:00.351 } 00:18:00.351 } 
00:18:00.351 ]' 00:18:00.351 01:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:00.608 01:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:00.608 01:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:00.608 01:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:00.608 01:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:00.608 01:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.608 01:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.608 01:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.865 01:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NmI0ZjhmNDYzOGVlODY0OWJlMzNkMWNhZjNjZmVjZjAxZjIyOTE2YTUzMzYwOGU1lpyeog==: 00:18:01.432 01:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.432 01:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:01.432 01:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.432 01:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.432 01:20:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.432 01:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:01.432 01:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:01.432 01:20:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:01.432 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:18:01.432 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:01.432 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:01.432 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:01.432 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:01.432 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:01.432 01:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.432 01:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.432 01:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.432 01:20:37 nvmf_tcp.nvmf_auth_target -- 
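On the host side each case is exercised twice: first through SPDK's own initiator with bdev_nvme_attach_controller and --dhchap-key (the next trace lines show this attach for key1), then through the kernel initiator with nvme connect and --dhchap-secret. A sketch of the SPDK path, with addresses, NQNs, and key name taken from the run above:

    # Attach from the host-side SPDK app; DH-HMAC-CHAP runs in-band during connect.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock

    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1

    # Tear the controller down again once the qpair has been inspected.
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0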
target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:01.432 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:01.689 00:18:01.689 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:01.689 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:01.689 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.959 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.959 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.959 01:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.959 01:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.959 01:20:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.959 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:01.959 { 00:18:01.959 "cntlid": 59, 00:18:01.959 "qid": 0, 00:18:01.959 "state": "enabled", 00:18:01.959 "listen_address": { 00:18:01.959 "trtype": "TCP", 00:18:01.959 "adrfam": "IPv4", 00:18:01.959 "traddr": "10.0.0.2", 00:18:01.959 "trsvcid": "4420" 00:18:01.959 }, 00:18:01.960 "peer_address": { 00:18:01.960 "trtype": "TCP", 00:18:01.960 "adrfam": "IPv4", 00:18:01.960 "traddr": "10.0.0.1", 00:18:01.960 "trsvcid": "33808" 00:18:01.960 }, 00:18:01.960 "auth": { 00:18:01.960 "state": "completed", 00:18:01.960 "digest": "sha384", 00:18:01.960 "dhgroup": "ffdhe2048" 00:18:01.960 } 00:18:01.960 } 00:18:01.960 ]' 00:18:01.960 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:01.960 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:01.960 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:01.960 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:01.960 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:01.960 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.960 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.960 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.237 01:20:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NWYzZTEzZGRiMWEzZTE5OWQ2Y2M5YTk4NGYxNDA5MTBN+blz: 00:18:02.801 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # 
nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.801 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:02.801 01:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.801 01:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.801 01:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.801 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:02.801 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:02.801 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:02.801 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:18:02.801 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:02.801 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:02.801 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:02.801 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:02.801 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:02.801 01:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.801 01:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.801 01:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.801 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:02.802 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:03.058 00:18:03.059 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:03.059 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:03.059 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.316 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.316 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.316 01:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.316 01:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:03.316 01:20:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.316 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:03.316 { 00:18:03.316 "cntlid": 61, 00:18:03.316 "qid": 0, 00:18:03.316 "state": "enabled", 00:18:03.316 "listen_address": { 00:18:03.316 "trtype": "TCP", 00:18:03.316 "adrfam": "IPv4", 00:18:03.316 "traddr": "10.0.0.2", 00:18:03.316 "trsvcid": "4420" 00:18:03.316 }, 00:18:03.316 "peer_address": { 00:18:03.316 "trtype": "TCP", 00:18:03.316 "adrfam": "IPv4", 00:18:03.316 "traddr": "10.0.0.1", 00:18:03.316 "trsvcid": "38270" 00:18:03.316 }, 00:18:03.316 "auth": { 00:18:03.316 "state": "completed", 00:18:03.316 "digest": "sha384", 00:18:03.316 "dhgroup": "ffdhe2048" 00:18:03.316 } 00:18:03.316 } 00:18:03.316 ]' 00:18:03.316 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:03.316 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:03.316 01:20:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:03.316 01:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:03.316 01:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:03.573 01:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.573 01:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.573 01:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.573 01:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:OTYyNjVlYWYwYjNhNDYyNzU0NWEwZTFjNzUyMzRjZGUzYWYxNDRjMTljMTE3N2Q2dnAXGQ==: 00:18:04.138 01:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.138 01:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:04.138 01:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.138 01:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.138 01:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.138 01:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:04.138 01:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:04.138 01:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:04.396 01:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:18:04.396 01:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:04.396 01:20:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:18:04.396 01:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:04.396 01:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:04.396 01:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:04.396 01:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.396 01:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.396 01:20:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.396 01:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:04.396 01:20:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:04.654 00:18:04.654 01:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:04.654 01:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:04.654 01:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.912 01:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.912 01:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.912 01:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.912 01:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.912 01:20:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.912 01:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:04.912 { 00:18:04.912 "cntlid": 63, 00:18:04.912 "qid": 0, 00:18:04.912 "state": "enabled", 00:18:04.912 "listen_address": { 00:18:04.912 "trtype": "TCP", 00:18:04.912 "adrfam": "IPv4", 00:18:04.912 "traddr": "10.0.0.2", 00:18:04.912 "trsvcid": "4420" 00:18:04.912 }, 00:18:04.912 "peer_address": { 00:18:04.912 "trtype": "TCP", 00:18:04.912 "adrfam": "IPv4", 00:18:04.912 "traddr": "10.0.0.1", 00:18:04.912 "trsvcid": "38306" 00:18:04.912 }, 00:18:04.912 "auth": { 00:18:04.912 "state": "completed", 00:18:04.912 "digest": "sha384", 00:18:04.912 "dhgroup": "ffdhe2048" 00:18:04.912 } 00:18:04.912 } 00:18:04.912 ]' 00:18:04.912 01:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:04.912 01:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.912 01:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:04.912 01:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:04.912 01:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:04.912 01:20:40 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.912 01:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.912 01:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.170 01:20:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YWIwYjhlZDliNmUxMmYxYjQzZmQ5YTRmOTViNDgwNjE2ZDUyN2Q0MWE2ZTg5OWY1ODExMzY1YzUzMDFkNzMwNvZU+5s=: 00:18:05.735 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.735 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:05.735 01:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.735 01:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.735 01:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.735 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.735 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:05.735 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:05.736 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:05.736 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:18:05.736 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:05.736 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:05.736 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:05.736 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:05.736 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:05.736 01:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.736 01:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.993 01:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.993 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:05.993 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:05.993 00:18:05.993 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:05.993 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.993 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:06.251 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.251 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.251 01:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.251 01:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.251 01:20:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.251 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:06.251 { 00:18:06.251 "cntlid": 65, 00:18:06.251 "qid": 0, 00:18:06.251 "state": "enabled", 00:18:06.251 "listen_address": { 00:18:06.251 "trtype": "TCP", 00:18:06.251 "adrfam": "IPv4", 00:18:06.251 "traddr": "10.0.0.2", 00:18:06.251 "trsvcid": "4420" 00:18:06.251 }, 00:18:06.251 "peer_address": { 00:18:06.251 "trtype": "TCP", 00:18:06.251 "adrfam": "IPv4", 00:18:06.251 "traddr": "10.0.0.1", 00:18:06.251 "trsvcid": "38340" 00:18:06.251 }, 00:18:06.251 "auth": { 00:18:06.251 "state": "completed", 00:18:06.251 "digest": "sha384", 00:18:06.251 "dhgroup": "ffdhe3072" 00:18:06.251 } 00:18:06.251 } 00:18:06.251 ]' 00:18:06.251 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:06.251 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.251 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:06.509 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:06.509 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:06.509 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.509 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.509 01:20:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.510 01:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NmI0ZjhmNDYzOGVlODY0OWJlMzNkMWNhZjNjZmVjZjAxZjIyOTE2YTUzMzYwOGU1lpyeog==: 00:18:07.075 01:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.075 01:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:07.075 01:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.075 
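Every digest/dhgroup/key combination in this trace follows the same round trip: pin the host-side options, register the key for the host NQN on the target, attach and verify through SPDK, then repeat the connect through the kernel initiator, whose --dhchap-secret takes the DHHC-1-prefixed textual form visible in the nvme connect lines above. A condensed sketch of one iteration, assuming the same paths and NQNs as this run, a key already registered under the name key0, and the target answering on its default RPC socket:

    # One sha384/ffdhe3072 authentication round trip, condensed from the trace above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
    # Textual secret copied verbatim from the nvme connect line in the trace.
    secret='DHHC-1:00:NmI0ZjhmNDYzOGVlODY0OWJlMzNkMWNhZjNjZmVjZjAxZjIyOTE2YTUzMzYwOGU1lpyeog==:'

    # 1. Pin the host-side initiator to one digest/dhgroup pair.
    "$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    # 2. Allow the host on the subsystem with the matching key (default target socket assumed).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0
    # 3. Authenticate via the SPDK initiator, then drop the controller again.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key0
    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
    # 4. Authenticate via the kernel initiator using the textual secret, then clean up.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret "$secret"
    nvme disconnect -n "$subnqn"
    "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"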
01:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.075 01:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.075 01:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:07.075 01:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:07.075 01:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:07.333 01:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:18:07.333 01:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:07.333 01:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:07.333 01:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:07.333 01:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:07.333 01:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:07.334 01:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.334 01:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.334 01:20:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.334 01:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:07.334 01:20:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:07.591 00:18:07.591 01:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:07.591 01:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:07.591 01:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.849 01:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.849 01:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.849 01:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.849 01:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.849 01:20:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.849 01:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:07.849 { 00:18:07.849 "cntlid": 67, 00:18:07.849 "qid": 0, 00:18:07.849 "state": "enabled", 00:18:07.849 "listen_address": { 00:18:07.849 "trtype": "TCP", 00:18:07.849 "adrfam": "IPv4", 00:18:07.849 "traddr": "10.0.0.2", 00:18:07.849 "trsvcid": 
"4420" 00:18:07.849 }, 00:18:07.849 "peer_address": { 00:18:07.849 "trtype": "TCP", 00:18:07.849 "adrfam": "IPv4", 00:18:07.849 "traddr": "10.0.0.1", 00:18:07.849 "trsvcid": "38374" 00:18:07.849 }, 00:18:07.849 "auth": { 00:18:07.849 "state": "completed", 00:18:07.849 "digest": "sha384", 00:18:07.849 "dhgroup": "ffdhe3072" 00:18:07.849 } 00:18:07.849 } 00:18:07.849 ]' 00:18:07.849 01:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:07.849 01:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.849 01:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:07.849 01:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:07.849 01:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:07.849 01:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.849 01:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.849 01:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.107 01:20:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NWYzZTEzZGRiMWEzZTE5OWQ2Y2M5YTk4NGYxNDA5MTBN+blz: 00:18:08.673 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.673 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:08.673 01:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.673 01:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.673 01:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.673 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:08.673 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:08.673 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:08.931 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:18:08.931 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:08.931 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:08.931 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:08.931 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:08.931 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:08.931 01:20:44 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.931 01:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.931 01:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.931 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:08.931 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:09.189 00:18:09.189 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:09.189 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:09.189 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.189 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.189 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.189 01:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.189 01:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.189 01:20:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.189 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:09.189 { 00:18:09.189 "cntlid": 69, 00:18:09.189 "qid": 0, 00:18:09.189 "state": "enabled", 00:18:09.189 "listen_address": { 00:18:09.189 "trtype": "TCP", 00:18:09.189 "adrfam": "IPv4", 00:18:09.189 "traddr": "10.0.0.2", 00:18:09.189 "trsvcid": "4420" 00:18:09.189 }, 00:18:09.189 "peer_address": { 00:18:09.189 "trtype": "TCP", 00:18:09.189 "adrfam": "IPv4", 00:18:09.189 "traddr": "10.0.0.1", 00:18:09.189 "trsvcid": "38402" 00:18:09.189 }, 00:18:09.189 "auth": { 00:18:09.189 "state": "completed", 00:18:09.189 "digest": "sha384", 00:18:09.189 "dhgroup": "ffdhe3072" 00:18:09.189 } 00:18:09.189 } 00:18:09.189 ]' 00:18:09.189 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:09.189 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.189 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:09.447 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:09.447 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:09.447 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.447 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.447 01:20:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.705 01:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:OTYyNjVlYWYwYjNhNDYyNzU0NWEwZTFjNzUyMzRjZGUzYWYxNDRjMTljMTE3N2Q2dnAXGQ==: 00:18:10.271 01:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.271 01:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:10.271 01:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.271 01:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.271 01:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.271 01:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:10.271 01:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:10.271 01:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:10.271 01:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:18:10.271 01:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:10.271 01:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:10.271 01:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:10.271 01:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:10.271 01:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:10.271 01:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.271 01:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.271 01:20:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.271 01:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.271 01:20:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.529 00:18:10.529 01:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:10.529 01:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:10.529 01:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.787 01:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:18:10.787 01:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.787 01:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.787 01:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.787 01:20:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.787 01:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:10.787 { 00:18:10.787 "cntlid": 71, 00:18:10.787 "qid": 0, 00:18:10.787 "state": "enabled", 00:18:10.787 "listen_address": { 00:18:10.787 "trtype": "TCP", 00:18:10.787 "adrfam": "IPv4", 00:18:10.787 "traddr": "10.0.0.2", 00:18:10.787 "trsvcid": "4420" 00:18:10.787 }, 00:18:10.787 "peer_address": { 00:18:10.787 "trtype": "TCP", 00:18:10.787 "adrfam": "IPv4", 00:18:10.787 "traddr": "10.0.0.1", 00:18:10.787 "trsvcid": "38444" 00:18:10.787 }, 00:18:10.787 "auth": { 00:18:10.787 "state": "completed", 00:18:10.787 "digest": "sha384", 00:18:10.787 "dhgroup": "ffdhe3072" 00:18:10.787 } 00:18:10.787 } 00:18:10.787 ]' 00:18:10.787 01:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:10.787 01:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:10.787 01:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:10.787 01:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:10.787 01:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:10.787 01:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.787 01:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.787 01:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.045 01:20:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YWIwYjhlZDliNmUxMmYxYjQzZmQ5YTRmOTViNDgwNjE2ZDUyN2Q0MWE2ZTg5OWY1ODExMzY1YzUzMDFkNzMwNvZU+5s=: 00:18:11.610 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.610 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:11.610 01:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.610 01:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.610 01:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.610 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.610 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:11.610 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:11.610 01:20:47 
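This excerpt walks the sha384 digest through the null, ffdhe2048, ffdhe3072, and now ffdhe4096 DH groups, re-issuing bdev_nvme_set_options on the host side before each group so only the combination under test can be negotiated; the trace continues below with the expanded rpc.py call for ffdhe4096. A loop in the spirit of the script's own for dhgroup / for keyid structure, with the paths from this run, would look like:

    # Sweep one digest across several DH groups on the host-side initiator.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock

    digest=sha384
    for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096; do
        "$rpc" -s "$hostsock" bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # ...then attach, verify, and disconnect for each configured key, as traced above...
    done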
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:11.868 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:18:11.868 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:11.868 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:11.868 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:11.868 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:11.868 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:11.868 01:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.869 01:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.869 01:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.869 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:11.869 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:12.126 00:18:12.126 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:12.126 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:12.126 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.126 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.384 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.384 01:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.384 01:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.384 01:20:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.384 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:12.384 { 00:18:12.384 "cntlid": 73, 00:18:12.384 "qid": 0, 00:18:12.384 "state": "enabled", 00:18:12.384 "listen_address": { 00:18:12.384 "trtype": "TCP", 00:18:12.384 "adrfam": "IPv4", 00:18:12.384 "traddr": "10.0.0.2", 00:18:12.384 "trsvcid": "4420" 00:18:12.384 }, 00:18:12.384 "peer_address": { 00:18:12.384 "trtype": "TCP", 00:18:12.384 "adrfam": "IPv4", 00:18:12.384 "traddr": "10.0.0.1", 00:18:12.384 "trsvcid": "38468" 00:18:12.384 }, 00:18:12.384 "auth": { 00:18:12.384 "state": "completed", 00:18:12.384 "digest": "sha384", 00:18:12.384 "dhgroup": "ffdhe4096" 00:18:12.384 } 00:18:12.384 } 00:18:12.384 ]' 00:18:12.384 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r 
'.[0].auth.digest' 00:18:12.384 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.384 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:12.384 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:12.384 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:12.384 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.384 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.384 01:20:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.641 01:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NmI0ZjhmNDYzOGVlODY0OWJlMzNkMWNhZjNjZmVjZjAxZjIyOTE2YTUzMzYwOGU1lpyeog==: 00:18:13.207 01:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.207 01:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:13.207 01:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.207 01:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.207 01:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.207 01:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:13.207 01:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:13.207 01:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:13.207 01:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:18:13.207 01:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:13.207 01:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:13.207 01:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:13.207 01:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:13.207 01:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:13.207 01:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.207 01:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.207 01:20:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.207 01:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:13.207 01:20:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:13.466 00:18:13.466 01:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:13.466 01:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:13.466 01:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.724 01:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.724 01:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.724 01:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.724 01:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.724 01:20:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.724 01:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:13.724 { 00:18:13.724 "cntlid": 75, 00:18:13.724 "qid": 0, 00:18:13.724 "state": "enabled", 00:18:13.724 "listen_address": { 00:18:13.724 "trtype": "TCP", 00:18:13.724 "adrfam": "IPv4", 00:18:13.724 "traddr": "10.0.0.2", 00:18:13.724 "trsvcid": "4420" 00:18:13.724 }, 00:18:13.724 "peer_address": { 00:18:13.724 "trtype": "TCP", 00:18:13.724 "adrfam": "IPv4", 00:18:13.724 "traddr": "10.0.0.1", 00:18:13.724 "trsvcid": "40824" 00:18:13.724 }, 00:18:13.724 "auth": { 00:18:13.724 "state": "completed", 00:18:13.724 "digest": "sha384", 00:18:13.724 "dhgroup": "ffdhe4096" 00:18:13.724 } 00:18:13.724 } 00:18:13.724 ]' 00:18:13.724 01:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:13.724 01:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:13.724 01:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:13.982 01:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:13.982 01:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:13.982 01:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.982 01:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.982 01:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.982 01:20:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NWYzZTEzZGRiMWEzZTE5OWQ2Y2M5YTk4NGYxNDA5MTBN+blz: 00:18:14.549 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:14.549 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:14.549 01:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.549 01:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.549 01:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.549 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:14.549 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:14.549 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:14.806 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:18:14.806 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:14.806 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:14.806 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:14.806 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:14.806 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:14.806 01:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.806 01:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.806 01:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.806 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:14.806 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:15.096 00:18:15.096 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:15.096 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:15.096 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.354 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.354 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.354 01:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.354 01:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.354 01:20:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
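The surrounding entries repeat one DH-HMAC-CHAP authentication round per (digest, dhgroup, key) combination. Below is a condensed sketch of that round, assembled only from the commands visible in this log; it is not the test script itself. HOSTRPC mirrors the host-side rpc.py invocation, rpc_cmd stands in for the harness helper that reaches the target's RPC socket, and DIGEST/DHGROUP/KEYID/DHCHAP_SECRET are placeholders for the values the test loops over.

#!/usr/bin/env bash
set -e
HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
DIGEST=sha384        # the digest under test in this part of the log
DHGROUP=ffdhe4096    # one of the ffdhe groups the test iterates over
KEYID=key0           # keys iterate key0..key3

# 1. Restrict the SPDK host to a single digest/dhgroup combination.
$HOSTRPC bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"

# 2. Allow the host on the subsystem with the matching DH-HMAC-CHAP key (target side).
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "$KEYID"

# 3. Attach a controller from the SPDK host with that key, forcing authentication.
$HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "$KEYID"

# 4. Confirm the controller exists and the qpair completed auth with the expected parameters.
[[ $($HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$DIGEST"  ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$DHGROUP" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed  ]]

# 5. Tear down, repeat the handshake from the kernel initiator using the DHHC-1 secret
#    paired with $KEYID (placeholder here), then clean up the host entry.
$HOSTRPC bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret "$DHCHAP_SECRET"
nvme disconnect -n "$SUBNQN"
rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"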
00:18:15.354 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:15.354 { 00:18:15.354 "cntlid": 77, 00:18:15.354 "qid": 0, 00:18:15.354 "state": "enabled", 00:18:15.354 "listen_address": { 00:18:15.354 "trtype": "TCP", 00:18:15.354 "adrfam": "IPv4", 00:18:15.354 "traddr": "10.0.0.2", 00:18:15.354 "trsvcid": "4420" 00:18:15.354 }, 00:18:15.354 "peer_address": { 00:18:15.354 "trtype": "TCP", 00:18:15.354 "adrfam": "IPv4", 00:18:15.354 "traddr": "10.0.0.1", 00:18:15.354 "trsvcid": "40840" 00:18:15.354 }, 00:18:15.354 "auth": { 00:18:15.354 "state": "completed", 00:18:15.354 "digest": "sha384", 00:18:15.354 "dhgroup": "ffdhe4096" 00:18:15.354 } 00:18:15.354 } 00:18:15.354 ]' 00:18:15.354 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:15.354 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.354 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:15.354 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:15.354 01:20:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:15.354 01:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.354 01:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.354 01:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.612 01:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:OTYyNjVlYWYwYjNhNDYyNzU0NWEwZTFjNzUyMzRjZGUzYWYxNDRjMTljMTE3N2Q2dnAXGQ==: 00:18:16.178 01:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.178 01:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:16.178 01:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.178 01:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.178 01:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.178 01:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:16.178 01:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:16.178 01:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:16.436 01:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:18:16.436 01:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:16.436 01:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:16.436 01:20:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:16.436 01:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:16.436 01:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:16.436 01:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.436 01:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.436 01:20:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.436 01:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:16.436 01:20:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:16.693 00:18:16.693 01:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:16.693 01:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:16.693 01:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.693 01:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.693 01:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.693 01:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.951 01:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.951 01:20:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.951 01:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:16.951 { 00:18:16.951 "cntlid": 79, 00:18:16.951 "qid": 0, 00:18:16.951 "state": "enabled", 00:18:16.951 "listen_address": { 00:18:16.951 "trtype": "TCP", 00:18:16.951 "adrfam": "IPv4", 00:18:16.951 "traddr": "10.0.0.2", 00:18:16.951 "trsvcid": "4420" 00:18:16.951 }, 00:18:16.951 "peer_address": { 00:18:16.951 "trtype": "TCP", 00:18:16.951 "adrfam": "IPv4", 00:18:16.951 "traddr": "10.0.0.1", 00:18:16.951 "trsvcid": "40870" 00:18:16.951 }, 00:18:16.951 "auth": { 00:18:16.951 "state": "completed", 00:18:16.951 "digest": "sha384", 00:18:16.951 "dhgroup": "ffdhe4096" 00:18:16.951 } 00:18:16.951 } 00:18:16.951 ]' 00:18:16.951 01:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:16.951 01:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.951 01:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:16.951 01:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:16.951 01:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:16.951 01:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.951 01:20:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.951 01:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.208 01:20:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YWIwYjhlZDliNmUxMmYxYjQzZmQ5YTRmOTViNDgwNjE2ZDUyN2Q0MWE2ZTg5OWY1ODExMzY1YzUzMDFkNzMwNvZU+5s=: 00:18:17.774 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.774 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:17.774 01:20:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.774 01:20:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.774 01:20:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.774 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:17.774 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:17.774 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:17.774 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:17.774 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:18:17.774 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:17.774 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:17.774 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:17.774 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:17.774 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:17.774 01:20:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.774 01:20:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.774 01:20:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.774 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:17.774 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:18.032 00:18:18.290 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:18.290 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:18.290 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.290 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.290 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.290 01:20:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.290 01:20:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.290 01:20:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.290 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:18.290 { 00:18:18.290 "cntlid": 81, 00:18:18.290 "qid": 0, 00:18:18.290 "state": "enabled", 00:18:18.290 "listen_address": { 00:18:18.290 "trtype": "TCP", 00:18:18.290 "adrfam": "IPv4", 00:18:18.290 "traddr": "10.0.0.2", 00:18:18.290 "trsvcid": "4420" 00:18:18.290 }, 00:18:18.290 "peer_address": { 00:18:18.290 "trtype": "TCP", 00:18:18.290 "adrfam": "IPv4", 00:18:18.290 "traddr": "10.0.0.1", 00:18:18.290 "trsvcid": "40910" 00:18:18.290 }, 00:18:18.290 "auth": { 00:18:18.290 "state": "completed", 00:18:18.290 "digest": "sha384", 00:18:18.290 "dhgroup": "ffdhe6144" 00:18:18.290 } 00:18:18.290 } 00:18:18.290 ]' 00:18:18.290 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:18.290 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.290 01:20:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:18.548 01:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:18.548 01:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:18.548 01:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.548 01:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.548 01:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.548 01:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NmI0ZjhmNDYzOGVlODY0OWJlMzNkMWNhZjNjZmVjZjAxZjIyOTE2YTUzMzYwOGU1lpyeog==: 00:18:19.114 01:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.114 01:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:19.114 01:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.114 01:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:18:19.114 01:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.114 01:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:19.114 01:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:19.114 01:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:19.372 01:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:18:19.372 01:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:19.372 01:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:19.372 01:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:19.372 01:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:19.372 01:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:19.372 01:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.372 01:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.372 01:20:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.372 01:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:19.372 01:20:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:19.636 00:18:19.636 01:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:19.636 01:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:19.636 01:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.896 01:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.896 01:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.896 01:20:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.896 01:20:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.896 01:20:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.896 01:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:19.896 { 00:18:19.896 "cntlid": 83, 00:18:19.896 "qid": 0, 00:18:19.896 "state": "enabled", 00:18:19.896 "listen_address": { 00:18:19.896 "trtype": "TCP", 00:18:19.896 "adrfam": "IPv4", 00:18:19.896 "traddr": "10.0.0.2", 00:18:19.896 "trsvcid": "4420" 00:18:19.896 }, 00:18:19.896 "peer_address": { 00:18:19.896 
"trtype": "TCP", 00:18:19.897 "adrfam": "IPv4", 00:18:19.897 "traddr": "10.0.0.1", 00:18:19.897 "trsvcid": "40946" 00:18:19.897 }, 00:18:19.897 "auth": { 00:18:19.897 "state": "completed", 00:18:19.897 "digest": "sha384", 00:18:19.897 "dhgroup": "ffdhe6144" 00:18:19.897 } 00:18:19.897 } 00:18:19.897 ]' 00:18:19.897 01:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:19.897 01:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.897 01:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:19.897 01:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:19.897 01:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:20.155 01:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.155 01:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.155 01:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.155 01:20:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NWYzZTEzZGRiMWEzZTE5OWQ2Y2M5YTk4NGYxNDA5MTBN+blz: 00:18:20.721 01:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.721 01:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:20.721 01:20:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.721 01:20:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.721 01:20:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.721 01:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:20.721 01:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:20.721 01:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:20.980 01:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:18:20.980 01:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:20.980 01:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:20.980 01:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:20.980 01:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:20.980 01:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:20.980 01:20:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:20.980 01:20:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.980 01:20:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.980 01:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:20.980 01:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:21.238 00:18:21.238 01:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:21.238 01:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.238 01:20:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:21.496 01:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.497 01:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.497 01:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.497 01:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.497 01:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.497 01:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:21.497 { 00:18:21.497 "cntlid": 85, 00:18:21.497 "qid": 0, 00:18:21.497 "state": "enabled", 00:18:21.497 "listen_address": { 00:18:21.497 "trtype": "TCP", 00:18:21.497 "adrfam": "IPv4", 00:18:21.497 "traddr": "10.0.0.2", 00:18:21.497 "trsvcid": "4420" 00:18:21.497 }, 00:18:21.497 "peer_address": { 00:18:21.497 "trtype": "TCP", 00:18:21.497 "adrfam": "IPv4", 00:18:21.497 "traddr": "10.0.0.1", 00:18:21.497 "trsvcid": "40978" 00:18:21.497 }, 00:18:21.497 "auth": { 00:18:21.497 "state": "completed", 00:18:21.497 "digest": "sha384", 00:18:21.497 "dhgroup": "ffdhe6144" 00:18:21.497 } 00:18:21.497 } 00:18:21.497 ]' 00:18:21.497 01:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:21.497 01:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.497 01:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:21.497 01:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:21.497 01:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:21.754 01:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.754 01:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.754 01:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.754 01:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:OTYyNjVlYWYwYjNhNDYyNzU0NWEwZTFjNzUyMzRjZGUzYWYxNDRjMTljMTE3N2Q2dnAXGQ==: 00:18:22.320 01:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.320 01:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:22.320 01:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.320 01:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.320 01:20:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.320 01:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:22.320 01:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:22.320 01:20:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:22.579 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:18:22.579 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:22.579 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:22.579 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:22.579 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:22.579 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:22.579 01:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.579 01:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.579 01:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.579 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.579 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.837 00:18:22.837 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:22.837 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.837 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:23.095 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.095 01:20:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.095 01:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.095 01:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.095 01:20:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.095 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:23.095 { 00:18:23.095 "cntlid": 87, 00:18:23.095 "qid": 0, 00:18:23.095 "state": "enabled", 00:18:23.095 "listen_address": { 00:18:23.095 "trtype": "TCP", 00:18:23.095 "adrfam": "IPv4", 00:18:23.095 "traddr": "10.0.0.2", 00:18:23.095 "trsvcid": "4420" 00:18:23.095 }, 00:18:23.095 "peer_address": { 00:18:23.095 "trtype": "TCP", 00:18:23.095 "adrfam": "IPv4", 00:18:23.095 "traddr": "10.0.0.1", 00:18:23.095 "trsvcid": "46154" 00:18:23.095 }, 00:18:23.095 "auth": { 00:18:23.095 "state": "completed", 00:18:23.095 "digest": "sha384", 00:18:23.095 "dhgroup": "ffdhe6144" 00:18:23.095 } 00:18:23.095 } 00:18:23.095 ]' 00:18:23.095 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:23.095 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.095 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:23.095 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:23.095 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:23.354 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.354 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.354 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.354 01:20:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YWIwYjhlZDliNmUxMmYxYjQzZmQ5YTRmOTViNDgwNjE2ZDUyN2Q0MWE2ZTg5OWY1ODExMzY1YzUzMDFkNzMwNvZU+5s=: 00:18:23.921 01:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.921 01:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:23.921 01:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.921 01:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.921 01:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.921 01:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.921 01:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:23.921 01:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:23.921 01:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:24.179 01:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:18:24.180 01:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:24.180 01:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:24.180 01:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:24.180 01:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:24.180 01:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:24.180 01:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.180 01:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.180 01:20:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.180 01:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:24.180 01:20:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:24.745 00:18:24.745 01:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:24.745 01:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:24.745 01:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.745 01:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.745 01:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.745 01:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.745 01:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.745 01:21:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.745 01:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:24.745 { 00:18:24.745 "cntlid": 89, 00:18:24.745 "qid": 0, 00:18:24.745 "state": "enabled", 00:18:24.745 "listen_address": { 00:18:24.745 "trtype": "TCP", 00:18:24.745 "adrfam": "IPv4", 00:18:24.745 "traddr": "10.0.0.2", 00:18:24.745 "trsvcid": "4420" 00:18:24.745 }, 00:18:24.745 "peer_address": { 00:18:24.745 "trtype": "TCP", 00:18:24.745 "adrfam": "IPv4", 00:18:24.745 "traddr": "10.0.0.1", 00:18:24.745 "trsvcid": "46186" 00:18:24.745 }, 00:18:24.745 "auth": { 00:18:24.745 "state": "completed", 00:18:24.745 "digest": "sha384", 00:18:24.745 "dhgroup": "ffdhe8192" 00:18:24.745 } 00:18:24.745 } 00:18:24.745 ]' 00:18:24.745 01:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:24.745 01:21:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:24.745 01:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:25.002 01:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.002 01:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:25.002 01:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.002 01:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.002 01:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.002 01:21:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NmI0ZjhmNDYzOGVlODY0OWJlMzNkMWNhZjNjZmVjZjAxZjIyOTE2YTUzMzYwOGU1lpyeog==: 00:18:25.568 01:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.568 01:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:25.568 01:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.568 01:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.568 01:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.568 01:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:25.568 01:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:25.568 01:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:25.826 01:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:18:25.826 01:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:25.826 01:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:25.826 01:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:25.826 01:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:25.826 01:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:25.826 01:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.826 01:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.826 01:21:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.826 01:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:25.826 01:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:26.392 00:18:26.392 01:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:26.392 01:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:26.392 01:21:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.392 01:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.392 01:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.392 01:21:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.392 01:21:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.392 01:21:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.392 01:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:26.392 { 00:18:26.392 "cntlid": 91, 00:18:26.392 "qid": 0, 00:18:26.392 "state": "enabled", 00:18:26.392 "listen_address": { 00:18:26.392 "trtype": "TCP", 00:18:26.392 "adrfam": "IPv4", 00:18:26.392 "traddr": "10.0.0.2", 00:18:26.392 "trsvcid": "4420" 00:18:26.392 }, 00:18:26.392 "peer_address": { 00:18:26.392 "trtype": "TCP", 00:18:26.392 "adrfam": "IPv4", 00:18:26.392 "traddr": "10.0.0.1", 00:18:26.392 "trsvcid": "46218" 00:18:26.392 }, 00:18:26.392 "auth": { 00:18:26.392 "state": "completed", 00:18:26.392 "digest": "sha384", 00:18:26.392 "dhgroup": "ffdhe8192" 00:18:26.392 } 00:18:26.392 } 00:18:26.392 ]' 00:18:26.392 01:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:26.651 01:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:26.651 01:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:26.651 01:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:26.651 01:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:26.651 01:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.651 01:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.651 01:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.909 01:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NWYzZTEzZGRiMWEzZTE5OWQ2Y2M5YTk4NGYxNDA5MTBN+blz: 00:18:27.167 01:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:27.425 01:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:27.425 01:21:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.425 01:21:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.425 01:21:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.425 01:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:27.425 01:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:27.426 01:21:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:27.426 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:18:27.426 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:27.426 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:27.426 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:27.426 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:27.426 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:27.426 01:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.426 01:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.426 01:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.426 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:27.426 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:28.000 00:18:28.000 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:28.000 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:28.000 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.278 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.278 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.278 01:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.278 01:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.278 01:21:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
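The qpairs='[ ... ]' blocks, such as the one that follows, are the raw nvmf_subsystem_get_qpairs output the assertions run against: a single array entry carrying the TCP listen/peer address pair plus the negotiated auth digest, dhgroup, and state. A minimal sketch of pulling those fields out of a saved dump (qpairs.json is a hypothetical file name, not something the test writes):

# connection tuple: target listener <- initiator peer
jq -r '.[0] | "\(.listen_address.traddr):\(.listen_address.trsvcid) <- \(.peer_address.traddr):\(.peer_address.trsvcid)"' qpairs.json
# negotiated authentication parameters checked by target/auth.sh
jq -r '.[0].auth | "\(.digest)/\(.dhgroup): \(.state)"' qpairs.json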
00:18:28.278 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:28.278 { 00:18:28.278 "cntlid": 93, 00:18:28.278 "qid": 0, 00:18:28.278 "state": "enabled", 00:18:28.278 "listen_address": { 00:18:28.278 "trtype": "TCP", 00:18:28.278 "adrfam": "IPv4", 00:18:28.278 "traddr": "10.0.0.2", 00:18:28.278 "trsvcid": "4420" 00:18:28.278 }, 00:18:28.278 "peer_address": { 00:18:28.278 "trtype": "TCP", 00:18:28.278 "adrfam": "IPv4", 00:18:28.278 "traddr": "10.0.0.1", 00:18:28.278 "trsvcid": "46258" 00:18:28.278 }, 00:18:28.278 "auth": { 00:18:28.278 "state": "completed", 00:18:28.278 "digest": "sha384", 00:18:28.278 "dhgroup": "ffdhe8192" 00:18:28.278 } 00:18:28.278 } 00:18:28.278 ]' 00:18:28.278 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:28.278 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:28.278 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:28.278 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.278 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:28.278 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.278 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.278 01:21:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.536 01:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:OTYyNjVlYWYwYjNhNDYyNzU0NWEwZTFjNzUyMzRjZGUzYWYxNDRjMTljMTE3N2Q2dnAXGQ==: 00:18:29.103 01:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.103 01:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:29.103 01:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.103 01:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.103 01:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.103 01:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:29.103 01:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:29.103 01:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:29.103 01:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:18:29.103 01:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:29.103 01:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:29.103 01:21:04 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:29.103 01:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:29.103 01:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:29.103 01:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.103 01:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.103 01:21:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.103 01:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:29.103 01:21:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:29.670 00:18:29.670 01:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:29.670 01:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:29.670 01:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.928 01:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.928 01:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.928 01:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.928 01:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.928 01:21:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.928 01:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:29.928 { 00:18:29.928 "cntlid": 95, 00:18:29.928 "qid": 0, 00:18:29.928 "state": "enabled", 00:18:29.928 "listen_address": { 00:18:29.928 "trtype": "TCP", 00:18:29.928 "adrfam": "IPv4", 00:18:29.928 "traddr": "10.0.0.2", 00:18:29.928 "trsvcid": "4420" 00:18:29.928 }, 00:18:29.928 "peer_address": { 00:18:29.928 "trtype": "TCP", 00:18:29.928 "adrfam": "IPv4", 00:18:29.928 "traddr": "10.0.0.1", 00:18:29.928 "trsvcid": "46286" 00:18:29.928 }, 00:18:29.928 "auth": { 00:18:29.928 "state": "completed", 00:18:29.928 "digest": "sha384", 00:18:29.928 "dhgroup": "ffdhe8192" 00:18:29.928 } 00:18:29.928 } 00:18:29.928 ]' 00:18:29.928 01:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:29.928 01:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.928 01:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:29.928 01:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:29.928 01:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:29.928 01:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.928 01:21:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.928 01:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.185 01:21:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YWIwYjhlZDliNmUxMmYxYjQzZmQ5YTRmOTViNDgwNjE2ZDUyN2Q0MWE2ZTg5OWY1ODExMzY1YzUzMDFkNzMwNvZU+5s=: 00:18:30.751 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.751 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:30.751 01:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.751 01:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.751 01:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.751 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:18:30.751 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:30.751 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:30.751 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:30.751 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:31.010 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:18:31.010 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:31.010 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:31.010 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:31.010 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:31.010 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:31.010 01:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.010 01:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.010 01:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.010 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:31.010 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:31.010 00:18:31.269 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:31.269 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:31.269 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.269 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.269 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.269 01:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.269 01:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.269 01:21:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.269 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:31.269 { 00:18:31.269 "cntlid": 97, 00:18:31.269 "qid": 0, 00:18:31.269 "state": "enabled", 00:18:31.269 "listen_address": { 00:18:31.269 "trtype": "TCP", 00:18:31.269 "adrfam": "IPv4", 00:18:31.269 "traddr": "10.0.0.2", 00:18:31.269 "trsvcid": "4420" 00:18:31.269 }, 00:18:31.269 "peer_address": { 00:18:31.269 "trtype": "TCP", 00:18:31.269 "adrfam": "IPv4", 00:18:31.269 "traddr": "10.0.0.1", 00:18:31.269 "trsvcid": "46304" 00:18:31.269 }, 00:18:31.269 "auth": { 00:18:31.269 "state": "completed", 00:18:31.269 "digest": "sha512", 00:18:31.269 "dhgroup": "null" 00:18:31.269 } 00:18:31.269 } 00:18:31.269 ]' 00:18:31.269 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:31.527 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.527 01:21:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:31.527 01:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:31.527 01:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:31.527 01:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.527 01:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.527 01:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.784 01:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NmI0ZjhmNDYzOGVlODY0OWJlMzNkMWNhZjNjZmVjZjAxZjIyOTE2YTUzMzYwOGU1lpyeog==: 00:18:32.350 01:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.350 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.351 01:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:32.351 01:21:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.351 01:21:07 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.351 01:21:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.351 01:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:32.351 01:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:32.351 01:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:32.351 01:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:18:32.351 01:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:32.351 01:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:32.351 01:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:32.351 01:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:32.351 01:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:32.351 01:21:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.351 01:21:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.351 01:21:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.351 01:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:32.351 01:21:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:32.609 00:18:32.609 01:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:32.609 01:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:32.609 01:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.867 01:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.867 01:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.867 01:21:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.867 01:21:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.867 01:21:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.867 01:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:32.867 { 00:18:32.867 "cntlid": 99, 00:18:32.867 "qid": 0, 00:18:32.867 "state": "enabled", 00:18:32.867 "listen_address": { 00:18:32.867 "trtype": "TCP", 00:18:32.867 "adrfam": "IPv4", 00:18:32.867 "traddr": "10.0.0.2", 00:18:32.867 "trsvcid": "4420" 00:18:32.867 }, 
00:18:32.867 "peer_address": { 00:18:32.867 "trtype": "TCP", 00:18:32.867 "adrfam": "IPv4", 00:18:32.867 "traddr": "10.0.0.1", 00:18:32.867 "trsvcid": "43928" 00:18:32.867 }, 00:18:32.868 "auth": { 00:18:32.868 "state": "completed", 00:18:32.868 "digest": "sha512", 00:18:32.868 "dhgroup": "null" 00:18:32.868 } 00:18:32.868 } 00:18:32.868 ]' 00:18:32.868 01:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:32.868 01:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:32.868 01:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:32.868 01:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:32.868 01:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:32.868 01:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.868 01:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.868 01:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.125 01:21:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NWYzZTEzZGRiMWEzZTE5OWQ2Y2M5YTk4NGYxNDA5MTBN+blz: 00:18:33.689 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.689 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:33.690 01:21:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.690 01:21:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.690 01:21:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.690 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:33.690 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:33.690 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:33.947 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:18:33.947 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:33.947 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:33.947 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:33.947 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:33.947 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:33.947 01:21:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:33.947 01:21:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.947 01:21:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.947 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:33.947 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:34.205 00:18:34.205 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:34.205 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:34.205 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.205 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.205 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.205 01:21:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.205 01:21:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.205 01:21:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.205 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:34.205 { 00:18:34.205 "cntlid": 101, 00:18:34.205 "qid": 0, 00:18:34.205 "state": "enabled", 00:18:34.205 "listen_address": { 00:18:34.205 "trtype": "TCP", 00:18:34.205 "adrfam": "IPv4", 00:18:34.205 "traddr": "10.0.0.2", 00:18:34.205 "trsvcid": "4420" 00:18:34.205 }, 00:18:34.205 "peer_address": { 00:18:34.205 "trtype": "TCP", 00:18:34.205 "adrfam": "IPv4", 00:18:34.205 "traddr": "10.0.0.1", 00:18:34.205 "trsvcid": "43952" 00:18:34.205 }, 00:18:34.205 "auth": { 00:18:34.205 "state": "completed", 00:18:34.205 "digest": "sha512", 00:18:34.205 "dhgroup": "null" 00:18:34.205 } 00:18:34.205 } 00:18:34.205 ]' 00:18:34.205 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:34.463 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.463 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:34.463 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:34.463 01:21:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:34.464 01:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.464 01:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.464 01:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.721 01:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:OTYyNjVlYWYwYjNhNDYyNzU0NWEwZTFjNzUyMzRjZGUzYWYxNDRjMTljMTE3N2Q2dnAXGQ==: 00:18:35.287 01:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.287 01:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:35.287 01:21:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.287 01:21:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.287 01:21:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.287 01:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:35.287 01:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:35.287 01:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:35.287 01:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:18:35.287 01:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:35.287 01:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:35.287 01:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:35.287 01:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:35.287 01:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:35.287 01:21:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.287 01:21:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.287 01:21:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.287 01:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.288 01:21:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.545 00:18:35.545 01:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:35.545 01:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:35.545 01:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.804 01:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.804 01:21:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.804 01:21:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.804 01:21:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.804 01:21:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.804 01:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:35.804 { 00:18:35.804 "cntlid": 103, 00:18:35.804 "qid": 0, 00:18:35.804 "state": "enabled", 00:18:35.804 "listen_address": { 00:18:35.804 "trtype": "TCP", 00:18:35.804 "adrfam": "IPv4", 00:18:35.804 "traddr": "10.0.0.2", 00:18:35.804 "trsvcid": "4420" 00:18:35.804 }, 00:18:35.804 "peer_address": { 00:18:35.804 "trtype": "TCP", 00:18:35.804 "adrfam": "IPv4", 00:18:35.804 "traddr": "10.0.0.1", 00:18:35.804 "trsvcid": "43964" 00:18:35.804 }, 00:18:35.804 "auth": { 00:18:35.804 "state": "completed", 00:18:35.804 "digest": "sha512", 00:18:35.804 "dhgroup": "null" 00:18:35.804 } 00:18:35.804 } 00:18:35.804 ]' 00:18:35.804 01:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:35.804 01:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.804 01:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:35.804 01:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:18:35.804 01:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:35.804 01:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.804 01:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.804 01:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.062 01:21:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YWIwYjhlZDliNmUxMmYxYjQzZmQ5YTRmOTViNDgwNjE2ZDUyN2Q0MWE2ZTg5OWY1ODExMzY1YzUzMDFkNzMwNvZU+5s=: 00:18:36.627 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.627 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:36.627 01:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.627 01:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.627 01:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.627 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:36.627 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:36.627 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:36.627 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:36.885 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:18:36.885 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:36.885 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:36.885 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:36.885 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:36.885 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:36.885 01:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.885 01:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.885 01:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.885 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:36.885 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:37.142 00:18:37.142 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:37.142 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:37.142 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.142 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.142 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.142 01:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.142 01:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.142 01:21:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.142 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:37.142 { 00:18:37.142 "cntlid": 105, 00:18:37.142 "qid": 0, 00:18:37.142 "state": "enabled", 00:18:37.142 "listen_address": { 00:18:37.142 "trtype": "TCP", 00:18:37.142 "adrfam": "IPv4", 00:18:37.142 "traddr": "10.0.0.2", 00:18:37.142 "trsvcid": "4420" 00:18:37.142 }, 00:18:37.142 "peer_address": { 00:18:37.142 "trtype": "TCP", 00:18:37.142 "adrfam": "IPv4", 00:18:37.142 "traddr": "10.0.0.1", 00:18:37.142 "trsvcid": "43994" 00:18:37.142 }, 00:18:37.142 "auth": { 00:18:37.142 "state": "completed", 00:18:37.142 "digest": "sha512", 00:18:37.142 "dhgroup": "ffdhe2048" 00:18:37.142 } 00:18:37.142 } 00:18:37.142 ]' 00:18:37.142 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:37.400 01:21:12 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.400 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:37.400 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:37.400 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:37.400 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.400 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.400 01:21:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.658 01:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NmI0ZjhmNDYzOGVlODY0OWJlMzNkMWNhZjNjZmVjZjAxZjIyOTE2YTUzMzYwOGU1lpyeog==: 00:18:38.223 01:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.223 01:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:38.223 01:21:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.223 01:21:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.223 01:21:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.223 01:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:38.223 01:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:38.223 01:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:38.223 01:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:18:38.223 01:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:38.223 01:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:38.223 01:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:38.223 01:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:38.223 01:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:38.223 01:21:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.223 01:21:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.223 01:21:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.223 01:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:38.223 01:21:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:38.480 00:18:38.480 01:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:38.480 01:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.480 01:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:38.739 01:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.739 01:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.739 01:21:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.739 01:21:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.739 01:21:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.739 01:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:38.739 { 00:18:38.739 "cntlid": 107, 00:18:38.739 "qid": 0, 00:18:38.739 "state": "enabled", 00:18:38.739 "listen_address": { 00:18:38.739 "trtype": "TCP", 00:18:38.739 "adrfam": "IPv4", 00:18:38.739 "traddr": "10.0.0.2", 00:18:38.739 "trsvcid": "4420" 00:18:38.739 }, 00:18:38.739 "peer_address": { 00:18:38.739 "trtype": "TCP", 00:18:38.739 "adrfam": "IPv4", 00:18:38.739 "traddr": "10.0.0.1", 00:18:38.739 "trsvcid": "44032" 00:18:38.739 }, 00:18:38.739 "auth": { 00:18:38.739 "state": "completed", 00:18:38.739 "digest": "sha512", 00:18:38.739 "dhgroup": "ffdhe2048" 00:18:38.739 } 00:18:38.739 } 00:18:38.739 ]' 00:18:38.739 01:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:38.739 01:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.739 01:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:38.739 01:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:38.739 01:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:38.739 01:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.739 01:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.739 01:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.997 01:21:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NWYzZTEzZGRiMWEzZTE5OWQ2Y2M5YTk4NGYxNDA5MTBN+blz: 00:18:39.563 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:39.563 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:39.563 01:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.563 01:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.563 01:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.563 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:39.563 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:39.563 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:39.821 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:18:39.821 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:39.821 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:39.821 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:39.821 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:39.821 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:39.821 01:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.821 01:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.821 01:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.822 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:39.822 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:40.080 00:18:40.080 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:40.080 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:40.080 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.080 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.080 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.080 01:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.080 01:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.080 01:21:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
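The sweep itself is three nested loops, and each connect_authenticate pass finishes on the kernel initiator: nvme-cli connects with the plain-text DHHC-1 secret that matches the configured key index, disconnects, and the host entry is removed from the subsystem again. A rough sketch of that outer structure, pieced together from the for-loop markers and nvme-cli calls in the trace; the digests, dhgroups and keys arrays are assumed to have been populated earlier in the script (this run shows sha384 and sha512, the null and ffdhe2048 through ffdhe8192 groups, and keys key0 to key3 with their DHHC-1:00: to DHHC-1:03: secrets):

    # Outer sweep over every digest/dhgroup/key-id combination
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done

    # Tail of each connect_authenticate pass (shown separately here): the same
    # subsystem is exercised from the kernel initiator with the matching
    # plain-text secret, then everything is torn down again.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 006f0d1b-21c0-e711-906e-00163566263e \
        --dhchap-secret "${keys[$keyid]}"    # e.g. DHHC-1:02:OTYyNjVl... for key2
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Every combination logged so far reports auth.state completed on the qpair and a clean "disconnected 1 controller(s)" from nvme-cli, so both the SPDK-initiator and kernel-initiator legs are authenticating successfully in this run.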
00:18:40.080 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:40.080 { 00:18:40.080 "cntlid": 109, 00:18:40.080 "qid": 0, 00:18:40.080 "state": "enabled", 00:18:40.080 "listen_address": { 00:18:40.080 "trtype": "TCP", 00:18:40.080 "adrfam": "IPv4", 00:18:40.080 "traddr": "10.0.0.2", 00:18:40.080 "trsvcid": "4420" 00:18:40.080 }, 00:18:40.080 "peer_address": { 00:18:40.080 "trtype": "TCP", 00:18:40.080 "adrfam": "IPv4", 00:18:40.080 "traddr": "10.0.0.1", 00:18:40.080 "trsvcid": "44050" 00:18:40.080 }, 00:18:40.080 "auth": { 00:18:40.080 "state": "completed", 00:18:40.080 "digest": "sha512", 00:18:40.080 "dhgroup": "ffdhe2048" 00:18:40.080 } 00:18:40.080 } 00:18:40.080 ]' 00:18:40.080 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:40.338 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.338 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:40.338 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:40.338 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:40.338 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.338 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.338 01:21:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.595 01:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:OTYyNjVlYWYwYjNhNDYyNzU0NWEwZTFjNzUyMzRjZGUzYWYxNDRjMTljMTE3N2Q2dnAXGQ==: 00:18:40.882 01:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.140 01:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:41.140 01:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.140 01:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.140 01:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.140 01:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:41.140 01:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:41.140 01:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:41.140 01:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:18:41.140 01:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:41.140 01:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:41.140 01:21:16 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:41.140 01:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:41.140 01:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:41.140 01:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.140 01:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.140 01:21:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.140 01:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.140 01:21:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:41.399 00:18:41.399 01:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:41.399 01:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:41.399 01:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.658 01:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.658 01:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.658 01:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.658 01:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.658 01:21:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.658 01:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:41.658 { 00:18:41.658 "cntlid": 111, 00:18:41.658 "qid": 0, 00:18:41.658 "state": "enabled", 00:18:41.658 "listen_address": { 00:18:41.658 "trtype": "TCP", 00:18:41.658 "adrfam": "IPv4", 00:18:41.658 "traddr": "10.0.0.2", 00:18:41.658 "trsvcid": "4420" 00:18:41.658 }, 00:18:41.658 "peer_address": { 00:18:41.658 "trtype": "TCP", 00:18:41.658 "adrfam": "IPv4", 00:18:41.658 "traddr": "10.0.0.1", 00:18:41.658 "trsvcid": "44072" 00:18:41.658 }, 00:18:41.658 "auth": { 00:18:41.658 "state": "completed", 00:18:41.658 "digest": "sha512", 00:18:41.658 "dhgroup": "ffdhe2048" 00:18:41.658 } 00:18:41.658 } 00:18:41.658 ]' 00:18:41.658 01:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:41.658 01:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.658 01:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:41.658 01:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:41.658 01:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:41.658 01:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.658 01:21:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.658 01:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.916 01:21:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YWIwYjhlZDliNmUxMmYxYjQzZmQ5YTRmOTViNDgwNjE2ZDUyN2Q0MWE2ZTg5OWY1ODExMzY1YzUzMDFkNzMwNvZU+5s=: 00:18:42.482 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.482 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:42.482 01:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.482 01:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.482 01:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.482 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.482 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:42.482 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:42.482 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:42.741 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:18:42.741 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:42.741 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:42.741 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:42.741 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:42.741 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:42.741 01:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.741 01:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.741 01:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.741 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:42.741 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:42.999 00:18:42.999 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:42.999 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:42.999 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.999 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.258 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.258 01:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.258 01:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.258 01:21:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.258 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:43.258 { 00:18:43.258 "cntlid": 113, 00:18:43.258 "qid": 0, 00:18:43.258 "state": "enabled", 00:18:43.258 "listen_address": { 00:18:43.258 "trtype": "TCP", 00:18:43.258 "adrfam": "IPv4", 00:18:43.258 "traddr": "10.0.0.2", 00:18:43.258 "trsvcid": "4420" 00:18:43.258 }, 00:18:43.258 "peer_address": { 00:18:43.258 "trtype": "TCP", 00:18:43.258 "adrfam": "IPv4", 00:18:43.258 "traddr": "10.0.0.1", 00:18:43.258 "trsvcid": "37634" 00:18:43.258 }, 00:18:43.258 "auth": { 00:18:43.258 "state": "completed", 00:18:43.258 "digest": "sha512", 00:18:43.258 "dhgroup": "ffdhe3072" 00:18:43.258 } 00:18:43.258 } 00:18:43.258 ]' 00:18:43.258 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:43.258 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.258 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:43.258 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:43.258 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:43.258 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.258 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.258 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.516 01:21:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NmI0ZjhmNDYzOGVlODY0OWJlMzNkMWNhZjNjZmVjZjAxZjIyOTE2YTUzMzYwOGU1lpyeog==: 00:18:44.093 01:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.093 01:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:44.093 01:21:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.093 01:21:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:18:44.093 01:21:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.093 01:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:44.093 01:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:44.093 01:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:44.093 01:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:18:44.093 01:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:44.093 01:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:44.093 01:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:44.093 01:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:44.094 01:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:44.094 01:21:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.094 01:21:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.094 01:21:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.094 01:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:44.094 01:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:44.352 00:18:44.352 01:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:44.352 01:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:44.352 01:21:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.610 01:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.610 01:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.610 01:21:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.610 01:21:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.610 01:21:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.610 01:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:44.610 { 00:18:44.610 "cntlid": 115, 00:18:44.610 "qid": 0, 00:18:44.610 "state": "enabled", 00:18:44.610 "listen_address": { 00:18:44.610 "trtype": "TCP", 00:18:44.610 "adrfam": "IPv4", 00:18:44.610 "traddr": "10.0.0.2", 00:18:44.610 "trsvcid": "4420" 00:18:44.610 }, 00:18:44.610 "peer_address": { 00:18:44.610 
"trtype": "TCP", 00:18:44.610 "adrfam": "IPv4", 00:18:44.610 "traddr": "10.0.0.1", 00:18:44.610 "trsvcid": "37650" 00:18:44.610 }, 00:18:44.610 "auth": { 00:18:44.610 "state": "completed", 00:18:44.610 "digest": "sha512", 00:18:44.610 "dhgroup": "ffdhe3072" 00:18:44.610 } 00:18:44.610 } 00:18:44.610 ]' 00:18:44.610 01:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:44.610 01:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.610 01:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:44.610 01:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:44.610 01:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:44.610 01:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.610 01:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.610 01:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.868 01:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NWYzZTEzZGRiMWEzZTE5OWQ2Y2M5YTk4NGYxNDA5MTBN+blz: 00:18:45.435 01:21:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.435 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:45.435 01:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.435 01:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.435 01:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.435 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:45.435 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:45.435 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:45.694 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:18:45.694 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:45.694 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:45.694 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:45.694 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:45.694 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:45.694 01:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:45.694 01:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.694 01:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.694 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:45.694 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:45.952 00:18:45.952 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:45.952 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:45.952 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.952 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.952 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.952 01:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.953 01:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.953 01:21:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.953 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:45.953 { 00:18:45.953 "cntlid": 117, 00:18:45.953 "qid": 0, 00:18:45.953 "state": "enabled", 00:18:45.953 "listen_address": { 00:18:45.953 "trtype": "TCP", 00:18:45.953 "adrfam": "IPv4", 00:18:45.953 "traddr": "10.0.0.2", 00:18:45.953 "trsvcid": "4420" 00:18:45.953 }, 00:18:45.953 "peer_address": { 00:18:45.953 "trtype": "TCP", 00:18:45.953 "adrfam": "IPv4", 00:18:45.953 "traddr": "10.0.0.1", 00:18:45.953 "trsvcid": "37682" 00:18:45.953 }, 00:18:45.953 "auth": { 00:18:45.953 "state": "completed", 00:18:45.953 "digest": "sha512", 00:18:45.953 "dhgroup": "ffdhe3072" 00:18:45.953 } 00:18:45.953 } 00:18:45.953 ]' 00:18:45.953 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:46.210 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:46.210 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:46.210 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:46.210 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:46.210 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.210 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.210 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.469 01:21:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:OTYyNjVlYWYwYjNhNDYyNzU0NWEwZTFjNzUyMzRjZGUzYWYxNDRjMTljMTE3N2Q2dnAXGQ==: 00:18:47.035 01:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.035 01:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:47.035 01:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.035 01:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.035 01:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.035 01:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:47.035 01:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:47.035 01:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:47.035 01:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:18:47.035 01:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:47.035 01:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:47.035 01:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:47.035 01:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:47.035 01:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:47.035 01:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.035 01:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.035 01:21:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.036 01:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.036 01:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.294 00:18:47.295 01:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:47.295 01:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:47.295 01:21:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.555 01:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.555 01:21:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.555 01:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.555 01:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.555 01:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.555 01:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:47.555 { 00:18:47.555 "cntlid": 119, 00:18:47.555 "qid": 0, 00:18:47.555 "state": "enabled", 00:18:47.555 "listen_address": { 00:18:47.555 "trtype": "TCP", 00:18:47.555 "adrfam": "IPv4", 00:18:47.555 "traddr": "10.0.0.2", 00:18:47.555 "trsvcid": "4420" 00:18:47.555 }, 00:18:47.555 "peer_address": { 00:18:47.555 "trtype": "TCP", 00:18:47.555 "adrfam": "IPv4", 00:18:47.555 "traddr": "10.0.0.1", 00:18:47.555 "trsvcid": "37716" 00:18:47.555 }, 00:18:47.555 "auth": { 00:18:47.555 "state": "completed", 00:18:47.555 "digest": "sha512", 00:18:47.555 "dhgroup": "ffdhe3072" 00:18:47.555 } 00:18:47.555 } 00:18:47.555 ]' 00:18:47.555 01:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:47.555 01:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.555 01:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:47.555 01:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:47.555 01:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:47.555 01:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.555 01:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.555 01:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.814 01:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YWIwYjhlZDliNmUxMmYxYjQzZmQ5YTRmOTViNDgwNjE2ZDUyN2Q0MWE2ZTg5OWY1ODExMzY1YzUzMDFkNzMwNvZU+5s=: 00:18:48.380 01:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.380 01:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:48.380 01:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.380 01:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.380 01:21:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.380 01:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:48.380 01:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:48.380 01:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:48.380 01:21:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:48.638 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:18:48.638 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:48.638 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:48.638 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:48.638 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:48.638 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:48.638 01:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.638 01:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.638 01:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.638 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:48.638 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:48.896 00:18:48.896 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:48.896 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:48.896 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.896 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.896 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.896 01:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.896 01:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.896 01:21:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.896 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:48.896 { 00:18:48.897 "cntlid": 121, 00:18:48.897 "qid": 0, 00:18:48.897 "state": "enabled", 00:18:48.897 "listen_address": { 00:18:48.897 "trtype": "TCP", 00:18:48.897 "adrfam": "IPv4", 00:18:48.897 "traddr": "10.0.0.2", 00:18:48.897 "trsvcid": "4420" 00:18:48.897 }, 00:18:48.897 "peer_address": { 00:18:48.897 "trtype": "TCP", 00:18:48.897 "adrfam": "IPv4", 00:18:48.897 "traddr": "10.0.0.1", 00:18:48.897 "trsvcid": "37732" 00:18:48.897 }, 00:18:48.897 "auth": { 00:18:48.897 "state": "completed", 00:18:48.897 "digest": "sha512", 00:18:48.897 "dhgroup": "ffdhe4096" 00:18:48.897 } 00:18:48.897 } 00:18:48.897 ]' 00:18:48.897 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:49.155 01:21:24 
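Condensed from the xtrace above, each pass of this test exercises one digest/dhgroup/key combination: the host-side SPDK instance is told which DH-HMAC-CHAP digest and DH group to offer, the target registers the host NQN on the subsystem with the key under test, and a controller is attached so the AUTH negotiation actually runs. The sketch below replays that sequence as plain shell under a few assumptions: the RPC, HOSTSOCK and HOSTNQN variable names are introduced here for readability, the target-side `rpc_cmd` wrapper is assumed to reach the target's default RPC socket (its expansion is not shown in this excerpt), and the named keys key0-key3 were loaded earlier in the run.

    # One iteration as visible in the log (sha512 / ffdhe4096 / key0 at this point).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTSOCK=/var/tmp/host.sock
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e

    # host side: restrict the initiator to the digest/dhgroup under test
    "$RPC" -s "$HOSTSOCK" bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # target side: allow this host on the subsystem with the key being tested
    "$RPC" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key0

    # host side: attach a controller, which forces the in-band authentication to run
    "$RPC" -s "$HOSTSOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0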
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.155 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:49.155 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:49.155 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:49.155 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.155 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.155 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.413 01:21:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NmI0ZjhmNDYzOGVlODY0OWJlMzNkMWNhZjNjZmVjZjAxZjIyOTE2YTUzMzYwOGU1lpyeog==: 00:18:49.978 01:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.978 01:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:49.978 01:21:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.978 01:21:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.978 01:21:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.978 01:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:49.978 01:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:49.978 01:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:49.978 01:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:18:49.978 01:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:49.978 01:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:49.978 01:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:49.978 01:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:49.978 01:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:49.978 01:21:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.978 01:21:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.978 01:21:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.978 01:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:49.978 01:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:50.236 00:18:50.236 01:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:50.236 01:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:50.236 01:21:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.493 01:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.493 01:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.493 01:21:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.493 01:21:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.493 01:21:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.493 01:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:50.493 { 00:18:50.493 "cntlid": 123, 00:18:50.493 "qid": 0, 00:18:50.493 "state": "enabled", 00:18:50.493 "listen_address": { 00:18:50.493 "trtype": "TCP", 00:18:50.493 "adrfam": "IPv4", 00:18:50.493 "traddr": "10.0.0.2", 00:18:50.493 "trsvcid": "4420" 00:18:50.493 }, 00:18:50.493 "peer_address": { 00:18:50.493 "trtype": "TCP", 00:18:50.493 "adrfam": "IPv4", 00:18:50.493 "traddr": "10.0.0.1", 00:18:50.493 "trsvcid": "37756" 00:18:50.493 }, 00:18:50.493 "auth": { 00:18:50.493 "state": "completed", 00:18:50.493 "digest": "sha512", 00:18:50.493 "dhgroup": "ffdhe4096" 00:18:50.493 } 00:18:50.493 } 00:18:50.493 ]' 00:18:50.493 01:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:50.493 01:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.493 01:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:50.493 01:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:50.493 01:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:50.749 01:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.749 01:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.749 01:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.749 01:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NWYzZTEzZGRiMWEzZTE5OWQ2Y2M5YTk4NGYxNDA5MTBN+blz: 00:18:51.315 01:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:18:51.315 01:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:51.315 01:21:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.315 01:21:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.315 01:21:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.315 01:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:51.315 01:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:51.315 01:21:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:51.573 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:18:51.573 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:51.573 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:51.573 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:51.573 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:51.573 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:51.573 01:21:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.573 01:21:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.573 01:21:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.573 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:51.573 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:51.831 00:18:51.831 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:51.831 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:51.831 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.091 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.091 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.091 01:21:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.091 01:21:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.091 01:21:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
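After each attach, the harness verifies the result: the controller must exist under the requested name, and the target's view of the subsystem's queue pairs must report the negotiated auth parameters. That is what the repeated jq probes in this stretch of the log are doing. A minimal version of that check, with the expected values taken from the current iteration (sha512 / ffdhe4096) and the same caveat as above that the target-side socket used by `rpc_cmd` is assumed:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # the controller must have come up under the name we asked for
    [[ $("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # the target should report a completed AUTH transaction with the expected parameters
    qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]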
00:18:52.091 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:52.091 { 00:18:52.091 "cntlid": 125, 00:18:52.091 "qid": 0, 00:18:52.091 "state": "enabled", 00:18:52.091 "listen_address": { 00:18:52.091 "trtype": "TCP", 00:18:52.091 "adrfam": "IPv4", 00:18:52.091 "traddr": "10.0.0.2", 00:18:52.091 "trsvcid": "4420" 00:18:52.091 }, 00:18:52.091 "peer_address": { 00:18:52.091 "trtype": "TCP", 00:18:52.091 "adrfam": "IPv4", 00:18:52.091 "traddr": "10.0.0.1", 00:18:52.091 "trsvcid": "37792" 00:18:52.091 }, 00:18:52.091 "auth": { 00:18:52.091 "state": "completed", 00:18:52.091 "digest": "sha512", 00:18:52.091 "dhgroup": "ffdhe4096" 00:18:52.091 } 00:18:52.091 } 00:18:52.091 ]' 00:18:52.091 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:52.091 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.091 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:52.091 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:52.091 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:52.091 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.091 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.091 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.349 01:21:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:OTYyNjVlYWYwYjNhNDYyNzU0NWEwZTFjNzUyMzRjZGUzYWYxNDRjMTljMTE3N2Q2dnAXGQ==: 00:18:52.693 01:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.693 01:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:52.693 01:21:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.693 01:21:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.951 01:21:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.951 01:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:52.951 01:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:52.951 01:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:52.951 01:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:18:52.951 01:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:52.951 01:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:52.951 01:21:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:52.951 01:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:52.951 01:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:52.951 01:21:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.951 01:21:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.951 01:21:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.951 01:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.951 01:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.210 00:18:53.210 01:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:53.210 01:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:53.210 01:21:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.468 01:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.468 01:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.468 01:21:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.468 01:21:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.468 01:21:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.468 01:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:53.468 { 00:18:53.468 "cntlid": 127, 00:18:53.468 "qid": 0, 00:18:53.468 "state": "enabled", 00:18:53.468 "listen_address": { 00:18:53.468 "trtype": "TCP", 00:18:53.468 "adrfam": "IPv4", 00:18:53.468 "traddr": "10.0.0.2", 00:18:53.468 "trsvcid": "4420" 00:18:53.468 }, 00:18:53.468 "peer_address": { 00:18:53.468 "trtype": "TCP", 00:18:53.468 "adrfam": "IPv4", 00:18:53.468 "traddr": "10.0.0.1", 00:18:53.468 "trsvcid": "37388" 00:18:53.468 }, 00:18:53.468 "auth": { 00:18:53.468 "state": "completed", 00:18:53.468 "digest": "sha512", 00:18:53.468 "dhgroup": "ffdhe4096" 00:18:53.468 } 00:18:53.468 } 00:18:53.468 ]' 00:18:53.468 01:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:53.468 01:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:53.468 01:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:53.468 01:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:53.468 01:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:53.726 01:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.726 01:21:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.726 01:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.727 01:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YWIwYjhlZDliNmUxMmYxYjQzZmQ5YTRmOTViNDgwNjE2ZDUyN2Q0MWE2ZTg5OWY1ODExMzY1YzUzMDFkNzMwNvZU+5s=: 00:18:54.294 01:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.294 01:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:54.294 01:21:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.294 01:21:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.294 01:21:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.294 01:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:18:54.294 01:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:54.294 01:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:54.294 01:21:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:54.552 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:18:54.552 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:54.552 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:54.552 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:54.552 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:54.552 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:18:54.552 01:21:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.552 01:21:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.552 01:21:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.552 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:54.552 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:54.810 00:18:54.810 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:54.810 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:54.810 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.068 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.068 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.068 01:21:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.069 01:21:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.069 01:21:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.069 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:55.069 { 00:18:55.069 "cntlid": 129, 00:18:55.069 "qid": 0, 00:18:55.069 "state": "enabled", 00:18:55.069 "listen_address": { 00:18:55.069 "trtype": "TCP", 00:18:55.069 "adrfam": "IPv4", 00:18:55.069 "traddr": "10.0.0.2", 00:18:55.069 "trsvcid": "4420" 00:18:55.069 }, 00:18:55.069 "peer_address": { 00:18:55.069 "trtype": "TCP", 00:18:55.069 "adrfam": "IPv4", 00:18:55.069 "traddr": "10.0.0.1", 00:18:55.069 "trsvcid": "37420" 00:18:55.069 }, 00:18:55.069 "auth": { 00:18:55.069 "state": "completed", 00:18:55.069 "digest": "sha512", 00:18:55.069 "dhgroup": "ffdhe6144" 00:18:55.069 } 00:18:55.069 } 00:18:55.069 ]' 00:18:55.069 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:55.069 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.069 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:55.069 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:55.069 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:55.069 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.069 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.069 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.326 01:21:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NmI0ZjhmNDYzOGVlODY0OWJlMzNkMWNhZjNjZmVjZjAxZjIyOTE2YTUzMzYwOGU1lpyeog==: 00:18:55.893 01:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.893 01:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:55.893 01:21:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.893 01:21:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:18:55.893 01:21:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.893 01:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:55.893 01:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:55.893 01:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:56.152 01:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:18:56.152 01:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:56.152 01:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:56.152 01:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:56.152 01:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:56.152 01:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:18:56.152 01:21:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.152 01:21:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.152 01:21:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.152 01:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:56.152 01:21:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:18:56.410 00:18:56.410 01:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:56.410 01:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.410 01:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:56.668 01:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.668 01:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.668 01:21:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.668 01:21:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.668 01:21:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.668 01:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:56.668 { 00:18:56.668 "cntlid": 131, 00:18:56.668 "qid": 0, 00:18:56.668 "state": "enabled", 00:18:56.668 "listen_address": { 00:18:56.668 "trtype": "TCP", 00:18:56.669 "adrfam": "IPv4", 00:18:56.669 "traddr": "10.0.0.2", 00:18:56.669 "trsvcid": "4420" 00:18:56.669 }, 00:18:56.669 "peer_address": { 00:18:56.669 
"trtype": "TCP", 00:18:56.669 "adrfam": "IPv4", 00:18:56.669 "traddr": "10.0.0.1", 00:18:56.669 "trsvcid": "37440" 00:18:56.669 }, 00:18:56.669 "auth": { 00:18:56.669 "state": "completed", 00:18:56.669 "digest": "sha512", 00:18:56.669 "dhgroup": "ffdhe6144" 00:18:56.669 } 00:18:56.669 } 00:18:56.669 ]' 00:18:56.669 01:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:56.669 01:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:56.669 01:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:56.669 01:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:56.669 01:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:56.669 01:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.669 01:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.669 01:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.926 01:21:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NWYzZTEzZGRiMWEzZTE5OWQ2Y2M5YTk4NGYxNDA5MTBN+blz: 00:18:57.493 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.493 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:57.493 01:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.493 01:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.493 01:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.493 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:57.493 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:57.493 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:57.751 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:18:57.751 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:57.751 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:57.751 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:57.751 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:57.751 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:18:57.751 01:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:57.751 01:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.751 01:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.751 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:57.751 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:58.009 00:18:58.009 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:58.009 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:58.009 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.266 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.267 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.267 01:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.267 01:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.267 01:21:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.267 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:58.267 { 00:18:58.267 "cntlid": 133, 00:18:58.267 "qid": 0, 00:18:58.267 "state": "enabled", 00:18:58.267 "listen_address": { 00:18:58.267 "trtype": "TCP", 00:18:58.267 "adrfam": "IPv4", 00:18:58.267 "traddr": "10.0.0.2", 00:18:58.267 "trsvcid": "4420" 00:18:58.267 }, 00:18:58.267 "peer_address": { 00:18:58.267 "trtype": "TCP", 00:18:58.267 "adrfam": "IPv4", 00:18:58.267 "traddr": "10.0.0.1", 00:18:58.267 "trsvcid": "37464" 00:18:58.267 }, 00:18:58.267 "auth": { 00:18:58.267 "state": "completed", 00:18:58.267 "digest": "sha512", 00:18:58.267 "dhgroup": "ffdhe6144" 00:18:58.267 } 00:18:58.267 } 00:18:58.267 ]' 00:18:58.267 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:58.267 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.267 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:58.267 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:58.267 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:58.267 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.267 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.267 01:21:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.525 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:OTYyNjVlYWYwYjNhNDYyNzU0NWEwZTFjNzUyMzRjZGUzYWYxNDRjMTljMTE3N2Q2dnAXGQ==: 00:18:59.091 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.091 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:59.091 01:21:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.091 01:21:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.091 01:21:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.091 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:18:59.091 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:59.091 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:59.091 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:18:59.091 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:18:59.349 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:59.349 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:59.349 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:59.349 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:59.349 01:21:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.349 01:21:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.349 01:21:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.349 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.349 01:21:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.607 00:18:59.607 01:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:18:59.607 01:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:18:59.607 01:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.865 01:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.865 01:21:35 
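Each pass also authenticates with the kernel initiator, not just the SPDK bdev layer: after the qpair checks the controller is detached, nvme-cli connects in-band with the corresponding DHHC-1 secret, the connection is torn down, and the host entry is removed from the subsystem before the next key is configured. Roughly, with the long generated secret replaced by a placeholder and the same assumed variable names as before:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
    DHCHAP_SECRET='DHHC-1:02:...'   # placeholder: the log uses a generated test secret here

    # drop the SPDK-side controller, then authenticate via nvme-cli
    "$RPC" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid 006f0d1b-21c0-e711-906e-00163566263e \
        --dhchap-secret "$DHCHAP_SECRET"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # clear the host entry so the next (dhgroup, key) pass starts from a clean subsystem
    "$RPC" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"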
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.865 01:21:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.865 01:21:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.865 01:21:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.865 01:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:18:59.865 { 00:18:59.865 "cntlid": 135, 00:18:59.865 "qid": 0, 00:18:59.865 "state": "enabled", 00:18:59.865 "listen_address": { 00:18:59.865 "trtype": "TCP", 00:18:59.865 "adrfam": "IPv4", 00:18:59.865 "traddr": "10.0.0.2", 00:18:59.865 "trsvcid": "4420" 00:18:59.865 }, 00:18:59.865 "peer_address": { 00:18:59.865 "trtype": "TCP", 00:18:59.865 "adrfam": "IPv4", 00:18:59.865 "traddr": "10.0.0.1", 00:18:59.865 "trsvcid": "37502" 00:18:59.865 }, 00:18:59.865 "auth": { 00:18:59.865 "state": "completed", 00:18:59.865 "digest": "sha512", 00:18:59.865 "dhgroup": "ffdhe6144" 00:18:59.865 } 00:18:59.865 } 00:18:59.865 ]' 00:18:59.865 01:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:18:59.865 01:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.865 01:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:18:59.865 01:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:59.865 01:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:18:59.865 01:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.865 01:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.865 01:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.123 01:21:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YWIwYjhlZDliNmUxMmYxYjQzZmQ5YTRmOTViNDgwNjE2ZDUyN2Q0MWE2ZTg5OWY1ODExMzY1YzUzMDFkNzMwNvZU+5s=: 00:19:00.690 01:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.690 01:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:00.690 01:21:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.690 01:21:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.690 01:21:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.691 01:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:00.691 01:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:00.691 01:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:00.691 01:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:00.949 01:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:19:00.949 01:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:00.949 01:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:00.949 01:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:00.949 01:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:00.949 01:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:19:00.949 01:21:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.949 01:21:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.949 01:21:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.949 01:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:00.949 01:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:01.207 00:19:01.207 01:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:01.207 01:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:01.207 01:21:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.465 01:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.465 01:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.465 01:21:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.465 01:21:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.465 01:21:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.465 01:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:01.465 { 00:19:01.465 "cntlid": 137, 00:19:01.465 "qid": 0, 00:19:01.465 "state": "enabled", 00:19:01.465 "listen_address": { 00:19:01.465 "trtype": "TCP", 00:19:01.465 "adrfam": "IPv4", 00:19:01.465 "traddr": "10.0.0.2", 00:19:01.465 "trsvcid": "4420" 00:19:01.465 }, 00:19:01.465 "peer_address": { 00:19:01.465 "trtype": "TCP", 00:19:01.465 "adrfam": "IPv4", 00:19:01.465 "traddr": "10.0.0.1", 00:19:01.465 "trsvcid": "37514" 00:19:01.465 }, 00:19:01.465 "auth": { 00:19:01.465 "state": "completed", 00:19:01.465 "digest": "sha512", 00:19:01.465 "dhgroup": "ffdhe8192" 00:19:01.465 } 00:19:01.465 } 00:19:01.465 ]' 00:19:01.465 01:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:01.465 01:21:37 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.465 01:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:01.465 01:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:01.465 01:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:01.724 01:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.724 01:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.724 01:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.724 01:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NmI0ZjhmNDYzOGVlODY0OWJlMzNkMWNhZjNjZmVjZjAxZjIyOTE2YTUzMzYwOGU1lpyeog==: 00:19:02.290 01:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.290 01:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:02.290 01:21:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.290 01:21:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.290 01:21:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.290 01:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:02.290 01:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:02.290 01:21:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:02.548 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:19:02.548 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:02.549 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:02.549 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:02.549 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:02.549 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:19:02.549 01:21:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.549 01:21:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.549 01:21:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.549 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:02.549 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:03.116 00:19:03.116 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:03.116 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:03.116 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.116 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.116 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.116 01:21:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.116 01:21:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.116 01:21:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.116 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:03.116 { 00:19:03.116 "cntlid": 139, 00:19:03.116 "qid": 0, 00:19:03.116 "state": "enabled", 00:19:03.116 "listen_address": { 00:19:03.116 "trtype": "TCP", 00:19:03.116 "adrfam": "IPv4", 00:19:03.116 "traddr": "10.0.0.2", 00:19:03.116 "trsvcid": "4420" 00:19:03.116 }, 00:19:03.116 "peer_address": { 00:19:03.116 "trtype": "TCP", 00:19:03.116 "adrfam": "IPv4", 00:19:03.116 "traddr": "10.0.0.1", 00:19:03.116 "trsvcid": "45630" 00:19:03.116 }, 00:19:03.116 "auth": { 00:19:03.116 "state": "completed", 00:19:03.116 "digest": "sha512", 00:19:03.116 "dhgroup": "ffdhe8192" 00:19:03.116 } 00:19:03.116 } 00:19:03.116 ]' 00:19:03.116 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:03.116 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.116 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:03.374 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:03.374 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:03.374 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.374 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.374 01:21:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.375 01:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NWYzZTEzZGRiMWEzZTE5OWQ2Y2M5YTk4NGYxNDA5MTBN+blz: 00:19:03.942 01:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:19:03.943 01:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:03.943 01:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.943 01:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.943 01:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.943 01:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:03.943 01:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:03.943 01:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:04.201 01:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:19:04.201 01:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:04.201 01:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:04.201 01:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:04.201 01:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:04.201 01:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 00:19:04.201 01:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.201 01:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.201 01:21:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.201 01:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:04.201 01:21:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:04.768 00:19:04.768 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:04.768 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:04.768 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.768 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.768 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.768 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.768 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.768 01:21:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
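The pattern traced above repeats once per digest/DH-group/key combination: restrict the host-side SPDK app (listening on /var/tmp/host.sock) to one digest and DH group, allow the host NQN on the subsystem with the key under test, attach a controller with that key, check the negotiated auth parameters on the target, then redo the handshake with the kernel initiator and tear everything down. A minimal shell sketch of one such iteration is given below, assuming the same sockets, addresses and NQNs as in this run; the DHHC-1 secret shown is a placeholder, not a usable key.

    # One connect_authenticate round, condensed from the trace (run as root;
    # paths, NQNs and addresses are the ones used in this run).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Limit the host-side app to one digest/DH group, allow the host on the
    # subsystem with the key under test, then attach a controller with that key.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2

    # The target should now report a qpair whose auth state is "completed".
    $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'

    # Repeat the handshake with the kernel initiator (secret below is a
    # placeholder), then clean up for the next iteration.
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret 'DHHC-1:02:placeholder'
    nvme disconnect -n "$subnqn"
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The negative check later in the trace follows from the same commands: after the host is re-added with only key1, an attach attempt with key2 is expected to fail with the JSON-RPC "Invalid parameters" error shown further down.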
00:19:05.026 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:05.026 { 00:19:05.026 "cntlid": 141, 00:19:05.026 "qid": 0, 00:19:05.026 "state": "enabled", 00:19:05.026 "listen_address": { 00:19:05.026 "trtype": "TCP", 00:19:05.026 "adrfam": "IPv4", 00:19:05.026 "traddr": "10.0.0.2", 00:19:05.026 "trsvcid": "4420" 00:19:05.026 }, 00:19:05.026 "peer_address": { 00:19:05.026 "trtype": "TCP", 00:19:05.026 "adrfam": "IPv4", 00:19:05.026 "traddr": "10.0.0.1", 00:19:05.026 "trsvcid": "45666" 00:19:05.026 }, 00:19:05.026 "auth": { 00:19:05.026 "state": "completed", 00:19:05.026 "digest": "sha512", 00:19:05.026 "dhgroup": "ffdhe8192" 00:19:05.026 } 00:19:05.026 } 00:19:05.026 ]' 00:19:05.026 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:05.026 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.026 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:05.026 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.026 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:05.026 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.026 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.026 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.284 01:21:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:OTYyNjVlYWYwYjNhNDYyNzU0NWEwZTFjNzUyMzRjZGUzYWYxNDRjMTljMTE3N2Q2dnAXGQ==: 00:19:05.851 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.851 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:05.851 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.851 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.851 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.851 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:05.851 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:05.851 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:05.851 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:19:05.851 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:05.851 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:05.851 01:21:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:05.851 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:05.851 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:05.851 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.851 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.851 01:21:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.851 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:05.851 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.418 00:19:06.418 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:06.418 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:06.418 01:21:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.677 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.677 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.677 01:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.677 01:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.677 01:21:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.677 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:06.677 { 00:19:06.677 "cntlid": 143, 00:19:06.677 "qid": 0, 00:19:06.677 "state": "enabled", 00:19:06.677 "listen_address": { 00:19:06.677 "trtype": "TCP", 00:19:06.677 "adrfam": "IPv4", 00:19:06.677 "traddr": "10.0.0.2", 00:19:06.677 "trsvcid": "4420" 00:19:06.677 }, 00:19:06.677 "peer_address": { 00:19:06.677 "trtype": "TCP", 00:19:06.677 "adrfam": "IPv4", 00:19:06.677 "traddr": "10.0.0.1", 00:19:06.677 "trsvcid": "45688" 00:19:06.677 }, 00:19:06.677 "auth": { 00:19:06.677 "state": "completed", 00:19:06.677 "digest": "sha512", 00:19:06.677 "dhgroup": "ffdhe8192" 00:19:06.677 } 00:19:06.677 } 00:19:06.677 ]' 00:19:06.677 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:06.677 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.677 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:06.677 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:06.677 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:06.677 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.677 01:21:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.677 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.935 01:21:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:YWIwYjhlZDliNmUxMmYxYjQzZmQ5YTRmOTViNDgwNjE2ZDUyN2Q0MWE2ZTg5OWY1ODExMzY1YzUzMDFkNzMwNvZU+5s=: 00:19:07.503 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.503 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:07.503 01:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.503 01:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.503 01:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.503 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:19:07.503 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:19:07.503 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:19:07.503 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:07.503 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:07.503 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:07.762 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:19:07.762 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:07.762 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:07.762 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:07.762 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:07.762 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 00:19:07.762 01:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.762 01:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.762 01:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.762 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:07.762 
01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:08.026 00:19:08.026 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:08.026 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.026 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:08.289 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.289 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.289 01:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.289 01:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.289 01:21:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.289 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:08.289 { 00:19:08.289 "cntlid": 145, 00:19:08.289 "qid": 0, 00:19:08.289 "state": "enabled", 00:19:08.289 "listen_address": { 00:19:08.289 "trtype": "TCP", 00:19:08.289 "adrfam": "IPv4", 00:19:08.289 "traddr": "10.0.0.2", 00:19:08.289 "trsvcid": "4420" 00:19:08.289 }, 00:19:08.289 "peer_address": { 00:19:08.289 "trtype": "TCP", 00:19:08.289 "adrfam": "IPv4", 00:19:08.289 "traddr": "10.0.0.1", 00:19:08.289 "trsvcid": "45716" 00:19:08.289 }, 00:19:08.289 "auth": { 00:19:08.289 "state": "completed", 00:19:08.289 "digest": "sha512", 00:19:08.289 "dhgroup": "ffdhe8192" 00:19:08.289 } 00:19:08.289 } 00:19:08.289 ]' 00:19:08.289 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:08.289 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.289 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:08.547 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:08.547 01:21:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:08.547 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.547 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.547 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.547 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:NmI0ZjhmNDYzOGVlODY0OWJlMzNkMWNhZjNjZmVjZjAxZjIyOTE2YTUzMzYwOGU1lpyeog==: 00:19:09.114 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.114 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:09.114 01:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.114 01:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.114 01:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.114 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:19:09.114 01:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.114 01:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.114 01:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.114 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:09.114 01:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:09.115 01:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:09.115 01:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:09.115 01:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:09.115 01:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:09.115 01:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:09.115 01:21:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:09.115 01:21:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:09.709 request: 00:19:09.710 { 00:19:09.710 "name": "nvme0", 00:19:09.710 "trtype": "tcp", 00:19:09.710 "traddr": "10.0.0.2", 00:19:09.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:19:09.710 "adrfam": "ipv4", 00:19:09.710 "trsvcid": "4420", 00:19:09.710 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:09.710 "dhchap_key": "key2", 00:19:09.710 "method": "bdev_nvme_attach_controller", 00:19:09.710 "req_id": 1 00:19:09.710 } 00:19:09.710 Got JSON-RPC error response 00:19:09.710 response: 00:19:09.710 { 00:19:09.710 "code": -32602, 00:19:09.710 "message": "Invalid parameters" 00:19:09.710 } 00:19:09.710 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:09.710 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:09.710 01:21:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:09.710 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:09.710 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:09.710 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.710 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.710 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.710 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:19:09.710 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:19:09.710 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 4102776 00:19:09.710 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 4102776 ']' 00:19:09.710 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 4102776 00:19:09.710 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:19:09.710 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:09.710 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4102776 00:19:09.710 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:09.710 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:09.710 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4102776' 00:19:09.710 killing process with pid 4102776 00:19:09.710 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 4102776 00:19:09.710 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 4102776 00:19:09.967 01:21:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:09.967 01:21:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:09.967 01:21:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:09.967 01:21:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:09.967 01:21:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:09.967 01:21:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:09.967 01:21:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:09.967 rmmod nvme_tcp 00:19:09.967 rmmod nvme_fabrics 00:19:09.967 rmmod nvme_keyring 00:19:09.967 01:21:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:09.967 01:21:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:09.967 01:21:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:09.967 01:21:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 4102535 ']' 00:19:09.967 01:21:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 4102535 00:19:09.967 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@946 -- # '[' -z 4102535 ']' 00:19:09.967 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@950 -- # kill -0 4102535 00:19:09.967 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # uname 00:19:09.967 
01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:09.967 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4102535 00:19:10.225 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:10.225 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:10.225 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4102535' 00:19:10.225 killing process with pid 4102535 00:19:10.225 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # kill 4102535 00:19:10.225 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@970 -- # wait 4102535 00:19:10.225 01:21:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:10.225 01:21:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:10.225 01:21:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:10.225 01:21:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:10.225 01:21:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:10.225 01:21:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.225 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.225 01:21:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.751 01:21:47 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:12.751 01:21:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.RZi /tmp/spdk.key-sha256.ZXb /tmp/spdk.key-sha384.oFl /tmp/spdk.key-sha512.CuZ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:12.751 00:19:12.751 real 2m4.191s 00:19:12.751 user 4m34.802s 00:19:12.751 sys 0m28.054s 00:19:12.751 01:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:12.751 01:21:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.751 ************************************ 00:19:12.751 END TEST nvmf_auth_target 00:19:12.751 ************************************ 00:19:12.751 01:21:48 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:12.751 01:21:48 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:12.751 01:21:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:19:12.751 01:21:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:12.751 01:21:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:12.751 ************************************ 00:19:12.751 START TEST nvmf_bdevio_no_huge 00:19:12.751 ************************************ 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:12.751 * Looking for test storage... 
00:19:12.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.751 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:12.752 01:21:48 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:12.752 01:21:48 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:19.313 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:19.313 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:19.313 Found net devices under 0000:af:00.0: cvl_0_0 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:19.313 01:21:54 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:19.313 Found net devices under 0000:af:00.1: cvl_0_1 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:19.313 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:19.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:19.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:19:19.314 00:19:19.314 --- 10.0.0.2 ping statistics --- 00:19:19.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.314 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:19.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:19.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:19:19.314 00:19:19.314 --- 10.0.0.1 ping statistics --- 00:19:19.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.314 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=4127410 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 4127410 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 4127410 ']' 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
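The trace above is the nvmf_tcp_init plumbing from nvmf/common.sh: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target side, the other port (cvl_0_1) stays in the root namespace as the initiator side, a single iptables rule opens the NVMe/TCP port, and both directions are verified with ping. Condensed into a standalone sketch, using the interface names and addresses from this run:

    # target-side port lives in its own namespace; initiator-side port stays in the host
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP on port 4420
    ping -c 1 10.0.0.2                                                 # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> host

The nvmf_tgt launched next runs inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78), with --no-huge and a 1024 MB memory cap because this suite exercises the target without hugepages.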
00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:19.314 01:21:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:19.314 [2024-05-15 01:21:54.826817] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:19:19.314 [2024-05-15 01:21:54.826866] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:19.314 [2024-05-15 01:21:54.903668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:19.314 [2024-05-15 01:21:54.999166] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.314 [2024-05-15 01:21:54.999207] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.314 [2024-05-15 01:21:54.999216] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.314 [2024-05-15 01:21:54.999241] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:19.314 [2024-05-15 01:21:54.999248] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:19.314 [2024-05-15 01:21:54.999366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:19.314 [2024-05-15 01:21:54.999466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:19.314 [2024-05-15 01:21:54.999551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:19.314 [2024-05-15 01:21:54.999552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.244 [2024-05-15 01:21:55.699914] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.244 Malloc0 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:20.244 [2024-05-15 01:21:55.744538] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:20.244 [2024-05-15 01:21:55.744795] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:20.244 { 00:19:20.244 "params": { 00:19:20.244 "name": "Nvme$subsystem", 00:19:20.244 "trtype": "$TEST_TRANSPORT", 00:19:20.244 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:20.244 "adrfam": "ipv4", 00:19:20.244 "trsvcid": "$NVMF_PORT", 00:19:20.244 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:20.244 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:20.244 "hdgst": ${hdgst:-false}, 00:19:20.244 "ddgst": ${ddgst:-false} 00:19:20.244 }, 00:19:20.244 "method": "bdev_nvme_attach_controller" 00:19:20.244 } 00:19:20.244 EOF 00:19:20.244 )") 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
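For the bdevio run the target is then provisioned with a handful of rpc_cmd calls (rpc_cmd wraps scripts/rpc.py against the target's /var/tmp/spdk.sock), and bdevio is handed its NVMe-oF attach configuration through bash process substitution, which is what the --json /dev/fd/62 argument above corresponds to. Condensed, with rpc.py standing in for the full scripts/rpc.py path:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM-backed bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevio attaches over the JSON emitted by gen_nvmf_target_json (the heredoc above)
    # and mirrors the target's no-hugepage settings:
    test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024

That matches the "Nvme1n1: 131072 blocks of 512 bytes (64 MiB)" I/O target reported a little further down.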
00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:19:20.244 01:21:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:20.244 "params": { 00:19:20.244 "name": "Nvme1", 00:19:20.244 "trtype": "tcp", 00:19:20.244 "traddr": "10.0.0.2", 00:19:20.244 "adrfam": "ipv4", 00:19:20.244 "trsvcid": "4420", 00:19:20.244 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:20.244 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:20.244 "hdgst": false, 00:19:20.244 "ddgst": false 00:19:20.244 }, 00:19:20.244 "method": "bdev_nvme_attach_controller" 00:19:20.244 }' 00:19:20.244 [2024-05-15 01:21:55.796654] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:19:20.244 [2024-05-15 01:21:55.796705] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid4127512 ] 00:19:20.244 [2024-05-15 01:21:55.873262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:20.501 [2024-05-15 01:21:55.974722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.501 [2024-05-15 01:21:55.974807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.501 [2024-05-15 01:21:55.974818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.501 I/O targets: 00:19:20.501 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:20.501 00:19:20.501 00:19:20.501 CUnit - A unit testing framework for C - Version 2.1-3 00:19:20.501 http://cunit.sourceforge.net/ 00:19:20.501 00:19:20.501 00:19:20.501 Suite: bdevio tests on: Nvme1n1 00:19:20.501 Test: blockdev write read block ...passed 00:19:20.757 Test: blockdev write zeroes read block ...passed 00:19:20.757 Test: blockdev write zeroes read no split ...passed 00:19:20.757 Test: blockdev write zeroes read split ...passed 00:19:20.757 Test: blockdev write zeroes read split partial ...passed 00:19:20.757 Test: blockdev reset ...[2024-05-15 01:21:56.377557] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:20.757 [2024-05-15 01:21:56.377623] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f0910 (9): Bad file descriptor 00:19:20.757 [2024-05-15 01:21:56.392810] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:20.757 passed 00:19:20.757 Test: blockdev write read 8 blocks ...passed 00:19:20.757 Test: blockdev write read size > 128k ...passed 00:19:20.757 Test: blockdev write read invalid size ...passed 00:19:20.757 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:20.757 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:20.757 Test: blockdev write read max offset ...passed 00:19:21.015 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:21.015 Test: blockdev writev readv 8 blocks ...passed 00:19:21.015 Test: blockdev writev readv 30 x 1block ...passed 00:19:21.015 Test: blockdev writev readv block ...passed 00:19:21.015 Test: blockdev writev readv size > 128k ...passed 00:19:21.015 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:21.015 Test: blockdev comparev and writev ...[2024-05-15 01:21:56.576839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.015 [2024-05-15 01:21:56.576872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:21.015 [2024-05-15 01:21:56.576889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.015 [2024-05-15 01:21:56.576899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:21.015 [2024-05-15 01:21:56.577313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.015 [2024-05-15 01:21:56.577326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:21.015 [2024-05-15 01:21:56.577340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.015 [2024-05-15 01:21:56.577350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:21.015 [2024-05-15 01:21:56.577767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.015 [2024-05-15 01:21:56.577780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:21.015 [2024-05-15 01:21:56.577795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.015 [2024-05-15 01:21:56.577805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:21.015 [2024-05-15 01:21:56.578227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.015 [2024-05-15 01:21:56.578241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:21.015 [2024-05-15 01:21:56.578255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:21.015 [2024-05-15 01:21:56.578266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:21.015 passed 00:19:21.015 Test: blockdev nvme passthru rw ...passed 00:19:21.015 Test: blockdev nvme passthru vendor specific ...[2024-05-15 01:21:56.661944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:21.015 [2024-05-15 01:21:56.661962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:21.015 [2024-05-15 01:21:56.662265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:21.015 [2024-05-15 01:21:56.662278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:21.015 [2024-05-15 01:21:56.662574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:21.015 [2024-05-15 01:21:56.662587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:21.015 [2024-05-15 01:21:56.662879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:21.015 [2024-05-15 01:21:56.662892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:21.015 passed 00:19:21.015 Test: blockdev nvme admin passthru ...passed 00:19:21.273 Test: blockdev copy ...passed 00:19:21.273 00:19:21.273 Run Summary: Type Total Ran Passed Failed Inactive 00:19:21.273 suites 1 1 n/a 0 0 00:19:21.273 tests 23 23 23 0 0 00:19:21.273 asserts 152 152 152 0 n/a 00:19:21.273 00:19:21.273 Elapsed time = 1.176 seconds 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:21.530 rmmod nvme_tcp 00:19:21.530 rmmod nvme_fabrics 00:19:21.530 rmmod nvme_keyring 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 4127410 ']' 00:19:21.530 01:21:57 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 4127410 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 4127410 ']' 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 4127410 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4127410 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4127410' 00:19:21.530 killing process with pid 4127410 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 4127410 00:19:21.530 [2024-05-15 01:21:57.193540] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:21.530 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 4127410 00:19:22.096 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:22.096 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:22.096 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:22.096 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:22.096 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:22.096 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.096 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:22.096 01:21:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.997 01:21:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:23.997 00:19:23.997 real 0m11.570s 00:19:23.997 user 0m13.771s 00:19:23.997 sys 0m6.090s 00:19:23.997 01:21:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:23.997 01:21:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:23.997 ************************************ 00:19:23.997 END TEST nvmf_bdevio_no_huge 00:19:23.997 ************************************ 00:19:24.255 01:21:59 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:24.255 01:21:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:24.255 01:21:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:24.255 01:21:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:24.255 ************************************ 00:19:24.255 START TEST nvmf_tls 00:19:24.255 ************************************ 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
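The teardown that closes nvmf_bdevio_no_huge above is the mirror image of its setup: nvmftestfini unloads the kernel NVMe/TCP initiator modules (the rmmod lines), killprocess stops the nvmf_tgt started for the suite, and nvmf_tcp_fini undoes the namespace plumbing. As a sketch, noting that remove_spdk_ns runs with tracing suppressed here, so the netns deletion line is an assumption about what it amounts to:

    modprobe -v -r nvme-tcp            # also drops nvme_fabrics and nvme_keyring, per the rmmod output
    modprobe -v -r nvme-fabrics
    kill 4127410 && wait 4127410       # the nvmf_tgt launched for this suite
    ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of remove_spdk_ns
    ip -4 addr flush cvl_0_1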
00:19:24.255 * Looking for test storage... 00:19:24.255 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.255 01:21:59 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:19:24.256 01:21:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@291 -- # pci_devs=() 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:30.815 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.815 
01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:30.815 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:30.815 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:30.816 Found net devices under 0000:af:00.0: cvl_0_0 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:30.816 Found net devices under 0000:af:00.1: cvl_0_1 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:30.816 
01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:30.816 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:31.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:31.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:19:31.074 00:19:31.074 --- 10.0.0.2 ping statistics --- 00:19:31.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.074 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:31.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:31.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.251 ms 00:19:31.074 00:19:31.074 --- 10.0.0.1 ping statistics --- 00:19:31.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.074 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4131408 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4131408 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4131408 ']' 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:31.074 01:22:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.332 [2024-05-15 01:22:06.778607] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:19:31.332 [2024-05-15 01:22:06.778655] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.332 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.332 [2024-05-15 01:22:06.854705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.332 [2024-05-15 01:22:06.932112] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.332 [2024-05-15 01:22:06.932150] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
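For the TLS suite the target is started with --wait-for-rpc, which keeps the SPDK framework paused until an explicit framework_start_init. The RPC exchange that follows in the trace uses that window to make ssl the default socket implementation and to probe and set its tls_version and ktls options before any listener exists. Condensed (paths shortened, rpc.py standing in for scripts/rpc.py):

    # start the target paused, inside the target namespace
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    # once /var/tmp/spdk.sock is up (waitforlisten), configure the socket layer
    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_get_options -i ssl | jq -r .tls_version     # 0 in this run, i.e. not pinned yet
    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py sock_impl_set_options -i ssl --enable-ktls            # the trace toggles this on and back off
    rpc.py sock_impl_set_options -i ssl --disable-ktls
    rpc.py framework_start_init                                  # only now does initialization complete

The trace also briefly sets --tls-version 7 to verify the get/set round trip before settling on TLS 1.3 (--tls-version 13) for the rest of the suite.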
00:19:31.332 [2024-05-15 01:22:06.932160] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.332 [2024-05-15 01:22:06.932169] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.332 [2024-05-15 01:22:06.932176] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:31.332 [2024-05-15 01:22:06.932203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.899 01:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:31.899 01:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:31.899 01:22:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:31.899 01:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.899 01:22:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.219 01:22:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.219 01:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:19:32.219 01:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:32.219 true 00:19:32.219 01:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:32.219 01:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:19:32.498 01:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:19:32.498 01:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:19:32.498 01:22:07 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:32.498 01:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:32.498 01:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:19:32.757 01:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:19:32.757 01:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:19:32.757 01:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:33.015 01:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:33.015 01:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:19:33.015 01:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:19:33.015 01:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:19:33.015 01:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:33.015 01:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:19:33.273 01:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:19:33.273 01:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:19:33.273 01:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:33.532 01:22:08 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:33.532 01:22:08 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:19:33.532 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:19:33.532 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:19:33.532 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:33.790 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:33.790 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.Y6GuQNKis3 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.XAxzz0LVxF 00:19:34.051 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:34.052 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:34.052 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.Y6GuQNKis3 00:19:34.052 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.XAxzz0LVxF 00:19:34.052 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:19:34.311 01:22:09 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:34.570 01:22:10 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.Y6GuQNKis3 00:19:34.570 01:22:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Y6GuQNKis3 00:19:34.570 01:22:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:34.570 [2024-05-15 01:22:10.165388] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:34.570 01:22:10 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:34.829 01:22:10 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:34.829 [2024-05-15 01:22:10.510244] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:34.829 [2024-05-15 01:22:10.510314] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:34.829 [2024-05-15 01:22:10.510532] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.087 01:22:10 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:35.087 malloc0 00:19:35.087 01:22:10 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:35.345 01:22:10 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y6GuQNKis3 00:19:35.345 [2024-05-15 01:22:11.016139] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:35.345 01:22:11 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Y6GuQNKis3 00:19:35.603 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.566 Initializing NVMe Controllers 00:19:45.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:45.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:45.566 Initialization complete. Launching workers. 
00:19:45.566 ======================================================== 00:19:45.566 Latency(us) 00:19:45.566 Device Information : IOPS MiB/s Average min max 00:19:45.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16555.93 64.67 3866.13 752.21 5573.86 00:19:45.566 ======================================================== 00:19:45.566 Total : 16555.93 64.67 3866.13 752.21 5573.86 00:19:45.566 00:19:45.566 01:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Y6GuQNKis3 00:19:45.566 01:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:45.566 01:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:45.566 01:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:45.566 01:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Y6GuQNKis3' 00:19:45.566 01:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:45.566 01:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4133956 00:19:45.566 01:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:45.566 01:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:45.566 01:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4133956 /var/tmp/bdevperf.sock 00:19:45.566 01:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4133956 ']' 00:19:45.566 01:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.566 01:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:45.566 01:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:45.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:45.566 01:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:45.566 01:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.566 [2024-05-15 01:22:21.182343] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:19:45.566 [2024-05-15 01:22:21.182395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4133956 ] 00:19:45.566 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.566 [2024-05-15 01:22:21.249834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.824 [2024-05-15 01:22:21.319427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.389 01:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:46.389 01:22:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:46.389 01:22:21 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y6GuQNKis3 00:19:46.646 [2024-05-15 01:22:22.146062] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:46.646 [2024-05-15 01:22:22.146142] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:46.646 TLSTESTn1 00:19:46.646 01:22:22 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:46.646 Running I/O for 10 seconds... 00:19:58.846 00:19:58.846 Latency(us) 00:19:58.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.846 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:58.846 Verification LBA range: start 0x0 length 0x2000 00:19:58.846 TLSTESTn1 : 10.05 2032.77 7.94 0.00 0.00 62815.91 7077.89 109890.76 00:19:58.846 =================================================================================================================== 00:19:58.846 Total : 2032.77 7.94 0.00 0.00 62815.91 7077.89 109890.76 00:19:58.846 0 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4133956 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4133956 ']' 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4133956 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4133956 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4133956' 00:19:58.846 killing process with pid 4133956 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4133956 00:19:58.846 Received shutdown signal, test time was about 10.000000 seconds 00:19:58.846 00:19:58.846 Latency(us) 00:19:58.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:58.846 =================================================================================================================== 00:19:58.846 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:58.846 [2024-05-15 01:22:32.472023] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4133956 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XAxzz0LVxF 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XAxzz0LVxF 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XAxzz0LVxF 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XAxzz0LVxF' 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4135982 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4135982 /var/tmp/bdevperf.sock 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4135982 ']' 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:58.846 01:22:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.846 [2024-05-15 01:22:32.704044] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:19:58.846 [2024-05-15 01:22:32.704098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4135982 ] 00:19:58.846 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.846 [2024-05-15 01:22:32.767789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.847 [2024-05-15 01:22:32.836143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XAxzz0LVxF 00:19:58.847 [2024-05-15 01:22:33.670529] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:58.847 [2024-05-15 01:22:33.670608] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:58.847 [2024-05-15 01:22:33.675206] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:58.847 [2024-05-15 01:22:33.675811] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c0610 (107): Transport endpoint is not connected 00:19:58.847 [2024-05-15 01:22:33.676803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c0610 (9): Bad file descriptor 00:19:58.847 [2024-05-15 01:22:33.677804] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:58.847 [2024-05-15 01:22:33.677817] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:58.847 [2024-05-15 01:22:33.677828] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:58.847 request: 00:19:58.847 { 00:19:58.847 "name": "TLSTEST", 00:19:58.847 "trtype": "tcp", 00:19:58.847 "traddr": "10.0.0.2", 00:19:58.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.847 "adrfam": "ipv4", 00:19:58.847 "trsvcid": "4420", 00:19:58.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.847 "psk": "/tmp/tmp.XAxzz0LVxF", 00:19:58.847 "method": "bdev_nvme_attach_controller", 00:19:58.847 "req_id": 1 00:19:58.847 } 00:19:58.847 Got JSON-RPC error response 00:19:58.847 response: 00:19:58.847 { 00:19:58.847 "code": -32602, 00:19:58.847 "message": "Invalid parameters" 00:19:58.847 } 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4135982 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4135982 ']' 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4135982 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4135982 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4135982' 00:19:58.847 killing process with pid 4135982 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4135982 00:19:58.847 Received shutdown signal, test time was about 10.000000 seconds 00:19:58.847 00:19:58.847 Latency(us) 00:19:58.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:58.847 =================================================================================================================== 00:19:58.847 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:58.847 [2024-05-15 01:22:33.749838] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4135982 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Y6GuQNKis3 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Y6GuQNKis3 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Y6GuQNKis3 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Y6GuQNKis3' 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4136185 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4136185 /var/tmp/bdevperf.sock 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4136185 ']' 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:58.847 01:22:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.847 [2024-05-15 01:22:33.991071] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:19:58.847 [2024-05-15 01:22:33.991125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4136185 ] 00:19:58.847 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.847 [2024-05-15 01:22:34.057940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.847 [2024-05-15 01:22:34.128257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.105 01:22:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:59.105 01:22:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:19:59.105 01:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.Y6GuQNKis3 00:19:59.364 [2024-05-15 01:22:34.921937] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.364 [2024-05-15 01:22:34.922020] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:59.364 [2024-05-15 01:22:34.927324] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:59.364 [2024-05-15 01:22:34.927349] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:59.364 [2024-05-15 01:22:34.927377] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:59.364 [2024-05-15 01:22:34.928419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ea610 (107): Transport endpoint is not connected 00:19:59.364 [2024-05-15 01:22:34.929413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24ea610 (9): Bad file descriptor 00:19:59.364 [2024-05-15 01:22:34.930414] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:59.364 [2024-05-15 01:22:34.930427] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:59.364 [2024-05-15 01:22:34.930438] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:59.364 request: 00:19:59.364 { 00:19:59.364 "name": "TLSTEST", 00:19:59.364 "trtype": "tcp", 00:19:59.364 "traddr": "10.0.0.2", 00:19:59.364 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:59.364 "adrfam": "ipv4", 00:19:59.364 "trsvcid": "4420", 00:19:59.364 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.364 "psk": "/tmp/tmp.Y6GuQNKis3", 00:19:59.364 "method": "bdev_nvme_attach_controller", 00:19:59.364 "req_id": 1 00:19:59.364 } 00:19:59.364 Got JSON-RPC error response 00:19:59.364 response: 00:19:59.364 { 00:19:59.364 "code": -32602, 00:19:59.364 "message": "Invalid parameters" 00:19:59.364 } 00:19:59.364 01:22:34 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4136185 00:19:59.364 01:22:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4136185 ']' 00:19:59.364 01:22:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4136185 00:19:59.364 01:22:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:19:59.364 01:22:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:59.364 01:22:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4136185 00:19:59.364 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:59.364 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:59.364 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4136185' 00:19:59.364 killing process with pid 4136185 00:19:59.364 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4136185 00:19:59.364 Received shutdown signal, test time was about 10.000000 seconds 00:19:59.364 00:19:59.364 Latency(us) 00:19:59.364 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.364 =================================================================================================================== 00:19:59.364 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:59.364 [2024-05-15 01:22:35.003994] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:59.364 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4136185 00:19:59.622 01:22:35 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:59.622 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:59.622 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:59.622 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:59.622 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:59.622 01:22:35 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Y6GuQNKis3 00:19:59.622 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:59.622 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Y6GuQNKis3 00:19:59.622 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:59.623 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:59.623 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:59.623 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 
-- # case "$(type -t "$arg")" in 00:19:59.623 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Y6GuQNKis3 00:19:59.623 01:22:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:59.623 01:22:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:59.623 01:22:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:59.623 01:22:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Y6GuQNKis3' 00:19:59.623 01:22:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:59.623 01:22:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4136306 00:19:59.623 01:22:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:59.623 01:22:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:59.623 01:22:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4136306 /var/tmp/bdevperf.sock 00:19:59.623 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4136306 ']' 00:19:59.623 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:59.623 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:59.623 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:59.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:59.623 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:59.623 01:22:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.623 [2024-05-15 01:22:35.247996] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:19:59.623 [2024-05-15 01:22:35.248047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4136306 ] 00:19:59.623 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.881 [2024-05-15 01:22:35.314838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.881 [2024-05-15 01:22:35.381635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.446 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:00.446 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:00.446 01:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Y6GuQNKis3 00:20:00.704 [2024-05-15 01:22:36.200051] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:00.704 [2024-05-15 01:22:36.200128] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:00.704 [2024-05-15 01:22:36.210563] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:00.704 [2024-05-15 01:22:36.210587] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:00.704 [2024-05-15 01:22:36.210615] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:00.704 [2024-05-15 01:22:36.211542] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1980610 (107): Transport endpoint is not connected 00:20:00.704 [2024-05-15 01:22:36.212535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1980610 (9): Bad file descriptor 00:20:00.704 [2024-05-15 01:22:36.213537] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:00.704 [2024-05-15 01:22:36.213549] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:00.705 [2024-05-15 01:22:36.213563] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:00.705 request: 00:20:00.705 { 00:20:00.705 "name": "TLSTEST", 00:20:00.705 "trtype": "tcp", 00:20:00.705 "traddr": "10.0.0.2", 00:20:00.705 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.705 "adrfam": "ipv4", 00:20:00.705 "trsvcid": "4420", 00:20:00.705 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:00.705 "psk": "/tmp/tmp.Y6GuQNKis3", 00:20:00.705 "method": "bdev_nvme_attach_controller", 00:20:00.705 "req_id": 1 00:20:00.705 } 00:20:00.705 Got JSON-RPC error response 00:20:00.705 response: 00:20:00.705 { 00:20:00.705 "code": -32602, 00:20:00.705 "message": "Invalid parameters" 00:20:00.705 } 00:20:00.705 01:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4136306 00:20:00.705 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4136306 ']' 00:20:00.705 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4136306 00:20:00.705 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:00.705 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:00.705 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4136306 00:20:00.705 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:00.705 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:00.705 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4136306' 00:20:00.705 killing process with pid 4136306 00:20:00.705 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4136306 00:20:00.705 Received shutdown signal, test time was about 10.000000 seconds 00:20:00.705 00:20:00.705 Latency(us) 00:20:00.705 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.705 =================================================================================================================== 00:20:00.705 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:00.705 [2024-05-15 01:22:36.288687] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:00.705 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4136306 00:20:00.962 01:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:00.962 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:00.962 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:00.962 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:00.962 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:00.962 01:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:00.962 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:00.962 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:00.962 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:00.962 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:00.962 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:00.962 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
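The failing attach attempts in this stretch of the log (wrong key file, wrong hostnqn, wrong subnqn, and the no-PSK attempt whose setup starts just above) are all wrapped in the NOT helper from autotest_common.sh, whose xtrace ('local es=0', 'valid_exec_arg run_bdevperf ...', '(( !es == 0 ))') is interleaved through the output. A minimal stand-in with the same contract is sketched below purely as a reading aid; the real helper also validates its argument and treats exit codes above 128 specially, so this is not the project's implementation.

# Simplified "expect failure" wrapper: succeeds only if the wrapped command fails.
NOT() {
  local es=0
  "$@" || es=$?
  (( es != 0 ))
}

# Usage mirroring target/tls.sh@146: attaching with a key the target does not know must fail,
# so run_bdevperf's 'return 1' (seen above after the JSON-RPC error) makes the NOT call succeed.
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XAxzz0LVxF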
00:20:00.962 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:00.962 01:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:00.962 01:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:00.962 01:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:00.962 01:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:00.962 01:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:00.963 01:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4136553 00:20:00.963 01:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:00.963 01:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:00.963 01:22:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4136553 /var/tmp/bdevperf.sock 00:20:00.963 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4136553 ']' 00:20:00.963 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.963 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:00.963 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.963 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:00.963 01:22:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:00.963 [2024-05-15 01:22:36.530581] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:20:00.963 [2024-05-15 01:22:36.530634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4136553 ] 00:20:00.963 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.963 [2024-05-15 01:22:36.596396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.220 [2024-05-15 01:22:36.660917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:01.787 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:01.787 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:01.787 01:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:02.087 [2024-05-15 01:22:37.483828] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:02.087 [2024-05-15 01:22:37.485616] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd55cc0 (9): Bad file descriptor 00:20:02.087 [2024-05-15 01:22:37.486614] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:02.087 [2024-05-15 01:22:37.486627] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:02.087 [2024-05-15 01:22:37.486639] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:02.087 request: 00:20:02.087 { 00:20:02.087 "name": "TLSTEST", 00:20:02.087 "trtype": "tcp", 00:20:02.087 "traddr": "10.0.0.2", 00:20:02.087 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:02.087 "adrfam": "ipv4", 00:20:02.087 "trsvcid": "4420", 00:20:02.087 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.087 "method": "bdev_nvme_attach_controller", 00:20:02.087 "req_id": 1 00:20:02.087 } 00:20:02.087 Got JSON-RPC error response 00:20:02.087 response: 00:20:02.087 { 00:20:02.087 "code": -32602, 00:20:02.087 "message": "Invalid parameters" 00:20:02.087 } 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4136553 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4136553 ']' 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4136553 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4136553 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4136553' 00:20:02.087 killing process with pid 4136553 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4136553 00:20:02.087 Received shutdown signal, test time was about 10.000000 seconds 00:20:02.087 00:20:02.087 Latency(us) 00:20:02.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:02.087 =================================================================================================================== 00:20:02.087 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4136553 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 4131408 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4131408 ']' 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4131408 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:02.087 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4131408 00:20:02.346 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:02.346 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:02.346 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4131408' 00:20:02.346 killing process with pid 4131408 00:20:02.346 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4131408 
00:20:02.346 [2024-05-15 01:22:37.808414] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:02.346 [2024-05-15 01:22:37.808447] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:02.346 01:22:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4131408 00:20:02.346 01:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:02.346 01:22:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:02.346 01:22:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:02.346 01:22:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:02.346 01:22:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:02.346 01:22:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:02.346 01:22:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:02.605 01:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:02.605 01:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:02.605 01:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.LPIwG7wSST 00:20:02.605 01:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:02.605 01:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.LPIwG7wSST 00:20:02.605 01:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:02.605 01:22:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:02.605 01:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:02.605 01:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.605 01:22:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:02.605 01:22:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4136846 00:20:02.605 01:22:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4136846 00:20:02.605 01:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4136846 ']' 00:20:02.605 01:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.605 01:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:02.605 01:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.605 01:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:02.605 01:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:02.605 [2024-05-15 01:22:38.125563] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
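target/tls.sh@159-162 above build a second, retained PSK in the TLS interchange format (the NVMeTLSkey-1 prefix, the digest selector '2' passed to format_interchange_psk, and the base64-encoded key material) and store it in a temp file that must stay owner-only. Condensed, with the key string copied exactly as the trace prints it:

# Key produced by format_interchange_psk in the trace above.
key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'

# Write it without a trailing newline and keep it 0600; later steps in this log
# show both the initiator and the target rejecting the same file once it is 0666.
key_long_path=$(mktemp)
echo -n "$key_long" > "$key_long_path"
chmod 0600 "$key_long_path"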
00:20:02.605 [2024-05-15 01:22:38.125611] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.605 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.605 [2024-05-15 01:22:38.196558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.606 [2024-05-15 01:22:38.268356] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.606 [2024-05-15 01:22:38.268394] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.606 [2024-05-15 01:22:38.268404] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.606 [2024-05-15 01:22:38.268413] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.606 [2024-05-15 01:22:38.268420] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.606 [2024-05-15 01:22:38.268442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.542 01:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:03.542 01:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:03.542 01:22:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:03.542 01:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:03.542 01:22:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.542 01:22:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.542 01:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.LPIwG7wSST 00:20:03.542 01:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LPIwG7wSST 00:20:03.542 01:22:38 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:03.542 [2024-05-15 01:22:39.119911] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.542 01:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:03.800 01:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:03.800 [2024-05-15 01:22:39.444703] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:03.800 [2024-05-15 01:22:39.444751] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:03.800 [2024-05-15 01:22:39.444935] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.800 01:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:04.059 malloc0 00:20:04.059 01:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
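Pulled out of the setup_nvmf_tgt xtrace above (plus the nvmf_subsystem_add_host call that follows immediately in the trace), the target-side TLS configuration is a six-command sequence. The rpc.py path is shortened to a variable here only for readability; each command line is the one recorded in the log.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
key=/tmp/tmp.LPIwG7wSST                      # the 0600 PSK file created in the previous step

$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables the TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"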
00:20:04.319 01:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LPIwG7wSST 00:20:04.319 [2024-05-15 01:22:39.938214] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:04.319 01:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LPIwG7wSST 00:20:04.319 01:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:04.319 01:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:04.319 01:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:04.319 01:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LPIwG7wSST' 00:20:04.319 01:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:04.319 01:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4137171 00:20:04.319 01:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:04.319 01:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:04.319 01:22:39 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4137171 /var/tmp/bdevperf.sock 00:20:04.319 01:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4137171 ']' 00:20:04.319 01:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.319 01:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:04.319 01:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:04.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.319 01:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:04.319 01:22:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.319 [2024-05-15 01:22:40.003086] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:20:04.320 [2024-05-15 01:22:40.003139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4137171 ] 00:20:04.578 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.578 [2024-05-15 01:22:40.071990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.578 [2024-05-15 01:22:40.154238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.143 01:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:05.143 01:22:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:05.143 01:22:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LPIwG7wSST 00:20:05.402 [2024-05-15 01:22:40.949571] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.402 [2024-05-15 01:22:40.949652] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:05.402 TLSTESTn1 00:20:05.402 01:22:41 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:05.660 Running I/O for 10 seconds... 00:20:15.631 00:20:15.631 Latency(us) 00:20:15.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.631 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:15.631 Verification LBA range: start 0x0 length 0x2000 00:20:15.631 TLSTESTn1 : 10.05 2029.21 7.93 0.00 0.00 62933.80 4980.74 109051.90 00:20:15.631 =================================================================================================================== 00:20:15.631 Total : 2029.21 7.93 0.00 0.00 62933.80 4980.74 109051.90 00:20:15.631 0 00:20:15.631 01:22:51 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:15.631 01:22:51 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 4137171 00:20:15.631 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4137171 ']' 00:20:15.631 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4137171 00:20:15.631 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:15.631 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:15.631 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4137171 00:20:15.631 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:15.631 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:15.631 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4137171' 00:20:15.631 killing process with pid 4137171 00:20:15.631 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4137171 00:20:15.631 Received shutdown signal, test time was about 10.000000 seconds 00:20:15.631 00:20:15.631 Latency(us) 00:20:15.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:20:15.631 =================================================================================================================== 00:20:15.631 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.631 [2024-05-15 01:22:51.268473] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:15.631 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4137171 00:20:15.889 01:22:51 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.LPIwG7wSST 00:20:15.889 01:22:51 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LPIwG7wSST 00:20:15.889 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:15.889 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LPIwG7wSST 00:20:15.889 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:15.889 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.889 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:15.889 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.889 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LPIwG7wSST 00:20:15.889 01:22:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:15.890 01:22:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:15.890 01:22:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:15.890 01:22:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.LPIwG7wSST' 00:20:15.890 01:22:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:15.890 01:22:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=4139247 00:20:15.890 01:22:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:15.890 01:22:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:15.890 01:22:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 4139247 /var/tmp/bdevperf.sock 00:20:15.890 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4139247 ']' 00:20:15.890 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.890 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:15.890 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.890 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:15.890 01:22:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.890 [2024-05-15 01:22:51.529180] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
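target/tls.sh@170-171 loosen the key file to 0666 and then expect the client-side attach to fail; the 'Incorrect permissions for PSK file' / 'Operation not permitted' errors recorded next in the trace are that check firing inside bdev_nvme_load_psk. Compressed to the two relevant lines (NOT and run_bdevperf as used by the test script, not re-implemented here):

chmod 0666 /tmp/tmp.LPIwG7wSST   # deliberately too permissive

# Expected failure: the initiator refuses to load a PSK file readable by group/others.
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.LPIwG7wSST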
00:20:15.890 [2024-05-15 01:22:51.529238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4139247 ] 00:20:15.890 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.173 [2024-05-15 01:22:51.595769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.173 [2024-05-15 01:22:51.659593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.740 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:16.740 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:16.740 01:22:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LPIwG7wSST 00:20:16.999 [2024-05-15 01:22:52.466103] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:16.999 [2024-05-15 01:22:52.466158] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:16.999 [2024-05-15 01:22:52.466167] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.LPIwG7wSST 00:20:16.999 request: 00:20:16.999 { 00:20:16.999 "name": "TLSTEST", 00:20:16.999 "trtype": "tcp", 00:20:16.999 "traddr": "10.0.0.2", 00:20:16.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.999 "adrfam": "ipv4", 00:20:16.999 "trsvcid": "4420", 00:20:16.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.999 "psk": "/tmp/tmp.LPIwG7wSST", 00:20:16.999 "method": "bdev_nvme_attach_controller", 00:20:16.999 "req_id": 1 00:20:16.999 } 00:20:16.999 Got JSON-RPC error response 00:20:16.999 response: 00:20:16.999 { 00:20:16.999 "code": -1, 00:20:16.999 "message": "Operation not permitted" 00:20:16.999 } 00:20:16.999 01:22:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 4139247 00:20:16.999 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4139247 ']' 00:20:16.999 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4139247 00:20:16.999 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:16.999 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:16.999 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4139247 00:20:16.999 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:16.999 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:16.999 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4139247' 00:20:16.999 killing process with pid 4139247 00:20:16.999 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4139247 00:20:16.999 Received shutdown signal, test time was about 10.000000 seconds 00:20:16.999 00:20:16.999 Latency(us) 00:20:16.999 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.999 =================================================================================================================== 00:20:16.999 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:16.999 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 4139247 00:20:17.257 01:22:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:17.257 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:17.257 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:17.257 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:17.257 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:17.257 01:22:52 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 4136846 00:20:17.257 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4136846 ']' 00:20:17.257 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4136846 00:20:17.257 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:17.257 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:17.257 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4136846 00:20:17.257 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:17.257 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:17.257 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4136846' 00:20:17.257 killing process with pid 4136846 00:20:17.257 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4136846 00:20:17.257 [2024-05-15 01:22:52.786846] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:17.257 [2024-05-15 01:22:52.786885] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:17.257 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4136846 00:20:17.516 01:22:52 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:20:17.516 01:22:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:17.516 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:17.516 01:22:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.516 01:22:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4139528 00:20:17.516 01:22:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:17.516 01:22:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4139528 00:20:17.516 01:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4139528 ']' 00:20:17.516 01:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.516 01:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:17.516 01:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:17.516 01:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:17.516 01:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.516 [2024-05-15 01:22:53.057754] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:20:17.516 [2024-05-15 01:22:53.057806] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.516 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.516 [2024-05-15 01:22:53.130001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.516 [2024-05-15 01:22:53.192137] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.516 [2024-05-15 01:22:53.192178] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.516 [2024-05-15 01:22:53.192187] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.516 [2024-05-15 01:22:53.192199] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.516 [2024-05-15 01:22:53.192206] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.516 [2024-05-15 01:22:53.192234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.448 01:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:18.448 01:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:18.448 01:22:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:18.448 01:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:18.448 01:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.448 01:22:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.448 01:22:53 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.LPIwG7wSST 00:20:18.448 01:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:18.448 01:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.LPIwG7wSST 00:20:18.448 01:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:20:18.448 01:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:18.448 01:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:20:18.448 01:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:18.448 01:22:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.LPIwG7wSST 00:20:18.448 01:22:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LPIwG7wSST 00:20:18.448 01:22:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:18.448 [2024-05-15 01:22:54.054983] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.448 01:22:54 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:18.706 01:22:54 nvmf_tcp.nvmf_tls 
-- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:18.964 [2024-05-15 01:22:54.407844] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:18.964 [2024-05-15 01:22:54.407903] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:18.964 [2024-05-15 01:22:54.408101] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.964 01:22:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:18.964 malloc0 00:20:18.964 01:22:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:19.221 01:22:54 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LPIwG7wSST 00:20:19.221 [2024-05-15 01:22:54.897514] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:19.221 [2024-05-15 01:22:54.897545] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:19.221 [2024-05-15 01:22:54.897571] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:19.221 request: 00:20:19.221 { 00:20:19.221 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.221 "host": "nqn.2016-06.io.spdk:host1", 00:20:19.221 "psk": "/tmp/tmp.LPIwG7wSST", 00:20:19.221 "method": "nvmf_subsystem_add_host", 00:20:19.221 "req_id": 1 00:20:19.221 } 00:20:19.221 Got JSON-RPC error response 00:20:19.221 response: 00:20:19.221 { 00:20:19.221 "code": -32603, 00:20:19.221 "message": "Internal error" 00:20:19.221 } 00:20:19.479 01:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:19.479 01:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:19.479 01:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:19.479 01:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:19.479 01:22:54 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 4139528 00:20:19.479 01:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4139528 ']' 00:20:19.479 01:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4139528 00:20:19.479 01:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:19.479 01:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:19.479 01:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4139528 00:20:19.479 01:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:19.479 01:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:19.479 01:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4139528' 00:20:19.479 killing process with pid 4139528 00:20:19.479 01:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4139528 00:20:19.479 [2024-05-15 01:22:54.973766] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:19.479 01:22:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4139528 00:20:19.738 01:22:55 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.LPIwG7wSST 00:20:19.738 01:22:55 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:19.738 01:22:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:19.738 01:22:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:19.738 01:22:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.738 01:22:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4139842 00:20:19.738 01:22:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:19.738 01:22:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4139842 00:20:19.738 01:22:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4139842 ']' 00:20:19.738 01:22:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.738 01:22:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:19.738 01:22:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.738 01:22:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:19.738 01:22:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.738 [2024-05-15 01:22:55.245283] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:20:19.738 [2024-05-15 01:22:55.245334] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.738 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.738 [2024-05-15 01:22:55.316439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.738 [2024-05-15 01:22:55.378105] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.738 [2024-05-15 01:22:55.378148] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.738 [2024-05-15 01:22:55.378158] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.738 [2024-05-15 01:22:55.378166] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:19.738 [2024-05-15 01:22:55.378173] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
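
The JSON-RPC failure above is the intended first half of this test: tls.sh calls nvmf_subsystem_add_host with a PSK file whose permissions the target considers too open, gets "Incorrect permissions for PSK file" surfaced as error -32603, and only after the chmod 0600 shown above does the same call succeed in the runs that follow. A condensed, illustrative sketch of that check against an already-configured target is below; the key path and NQNs are taken from this log, while $SPDK_DIR and the 0644 starting mode are assumptions, and the sequencing is simplified (the script actually restarts the target between the failing and passing attempts).

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path as it appears in this log
KEY=/tmp/tmp.LPIwG7wSST                                      # PSK file used throughout tls.sh
RPC="$SPDK_DIR/scripts/rpc.py"                               # talks to the target on /var/tmp/spdk.sock by default

chmod 0644 "$KEY"    # readable beyond the owner: the target refuses to load it
if ! "$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
         nqn.2016-06.io.spdk:host1 --psk "$KEY"; then
    echo "add_host rejected the PSK file, as in the log above"
fi

chmod 0600 "$KEY"    # owner read/write only: accepted on the next attempt
"$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk "$KEY"
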
00:20:19.738 [2024-05-15 01:22:55.378199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.673 01:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:20.673 01:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:20.673 01:22:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:20.673 01:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:20.673 01:22:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.673 01:22:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.673 01:22:56 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.LPIwG7wSST 00:20:20.673 01:22:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LPIwG7wSST 00:20:20.673 01:22:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:20.673 [2024-05-15 01:22:56.248949] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.673 01:22:56 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:20.931 01:22:56 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:20.931 [2024-05-15 01:22:56.585777] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:20.931 [2024-05-15 01:22:56.585836] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:20.931 [2024-05-15 01:22:56.586035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.931 01:22:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:21.188 malloc0 00:20:21.188 01:22:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:21.447 01:22:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LPIwG7wSST 00:20:21.447 [2024-05-15 01:22:57.075398] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:21.447 01:22:57 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:21.447 01:22:57 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=4140128 00:20:21.447 01:22:57 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:21.447 01:22:57 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 4140128 /var/tmp/bdevperf.sock 00:20:21.447 01:22:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4140128 ']' 00:20:21.447 01:22:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:20:21.447 01:22:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:21.447 01:22:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.447 01:22:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:21.447 01:22:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:21.447 [2024-05-15 01:22:57.123347] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:20:21.447 [2024-05-15 01:22:57.123394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4140128 ] 00:20:21.705 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.705 [2024-05-15 01:22:57.189353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.705 [2024-05-15 01:22:57.259563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.271 01:22:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:22.271 01:22:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:22.271 01:22:57 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LPIwG7wSST 00:20:22.530 [2024-05-15 01:22:58.085771] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:22.530 [2024-05-15 01:22:58.085854] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:22.530 TLSTESTn1 00:20:22.530 01:22:58 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:22.789 01:22:58 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:20:22.789 "subsystems": [ 00:20:22.789 { 00:20:22.789 "subsystem": "keyring", 00:20:22.789 "config": [] 00:20:22.789 }, 00:20:22.789 { 00:20:22.789 "subsystem": "iobuf", 00:20:22.789 "config": [ 00:20:22.789 { 00:20:22.789 "method": "iobuf_set_options", 00:20:22.789 "params": { 00:20:22.789 "small_pool_count": 8192, 00:20:22.789 "large_pool_count": 1024, 00:20:22.789 "small_bufsize": 8192, 00:20:22.789 "large_bufsize": 135168 00:20:22.789 } 00:20:22.789 } 00:20:22.789 ] 00:20:22.789 }, 00:20:22.789 { 00:20:22.789 "subsystem": "sock", 00:20:22.789 "config": [ 00:20:22.789 { 00:20:22.789 "method": "sock_impl_set_options", 00:20:22.789 "params": { 00:20:22.789 "impl_name": "posix", 00:20:22.789 "recv_buf_size": 2097152, 00:20:22.789 "send_buf_size": 2097152, 00:20:22.789 "enable_recv_pipe": true, 00:20:22.789 "enable_quickack": false, 00:20:22.789 "enable_placement_id": 0, 00:20:22.789 "enable_zerocopy_send_server": true, 00:20:22.789 "enable_zerocopy_send_client": false, 00:20:22.789 "zerocopy_threshold": 0, 00:20:22.789 "tls_version": 0, 00:20:22.789 "enable_ktls": false 00:20:22.789 } 00:20:22.789 }, 00:20:22.789 { 00:20:22.789 "method": "sock_impl_set_options", 00:20:22.789 "params": { 00:20:22.789 
"impl_name": "ssl", 00:20:22.789 "recv_buf_size": 4096, 00:20:22.789 "send_buf_size": 4096, 00:20:22.789 "enable_recv_pipe": true, 00:20:22.789 "enable_quickack": false, 00:20:22.789 "enable_placement_id": 0, 00:20:22.789 "enable_zerocopy_send_server": true, 00:20:22.789 "enable_zerocopy_send_client": false, 00:20:22.789 "zerocopy_threshold": 0, 00:20:22.789 "tls_version": 0, 00:20:22.789 "enable_ktls": false 00:20:22.789 } 00:20:22.789 } 00:20:22.789 ] 00:20:22.789 }, 00:20:22.789 { 00:20:22.789 "subsystem": "vmd", 00:20:22.789 "config": [] 00:20:22.789 }, 00:20:22.789 { 00:20:22.789 "subsystem": "accel", 00:20:22.789 "config": [ 00:20:22.789 { 00:20:22.789 "method": "accel_set_options", 00:20:22.789 "params": { 00:20:22.789 "small_cache_size": 128, 00:20:22.789 "large_cache_size": 16, 00:20:22.789 "task_count": 2048, 00:20:22.789 "sequence_count": 2048, 00:20:22.789 "buf_count": 2048 00:20:22.789 } 00:20:22.789 } 00:20:22.789 ] 00:20:22.789 }, 00:20:22.789 { 00:20:22.789 "subsystem": "bdev", 00:20:22.789 "config": [ 00:20:22.789 { 00:20:22.789 "method": "bdev_set_options", 00:20:22.789 "params": { 00:20:22.789 "bdev_io_pool_size": 65535, 00:20:22.789 "bdev_io_cache_size": 256, 00:20:22.789 "bdev_auto_examine": true, 00:20:22.789 "iobuf_small_cache_size": 128, 00:20:22.789 "iobuf_large_cache_size": 16 00:20:22.789 } 00:20:22.789 }, 00:20:22.789 { 00:20:22.789 "method": "bdev_raid_set_options", 00:20:22.789 "params": { 00:20:22.789 "process_window_size_kb": 1024 00:20:22.789 } 00:20:22.789 }, 00:20:22.789 { 00:20:22.789 "method": "bdev_iscsi_set_options", 00:20:22.789 "params": { 00:20:22.789 "timeout_sec": 30 00:20:22.789 } 00:20:22.789 }, 00:20:22.789 { 00:20:22.789 "method": "bdev_nvme_set_options", 00:20:22.789 "params": { 00:20:22.789 "action_on_timeout": "none", 00:20:22.789 "timeout_us": 0, 00:20:22.789 "timeout_admin_us": 0, 00:20:22.789 "keep_alive_timeout_ms": 10000, 00:20:22.789 "arbitration_burst": 0, 00:20:22.789 "low_priority_weight": 0, 00:20:22.789 "medium_priority_weight": 0, 00:20:22.789 "high_priority_weight": 0, 00:20:22.789 "nvme_adminq_poll_period_us": 10000, 00:20:22.789 "nvme_ioq_poll_period_us": 0, 00:20:22.789 "io_queue_requests": 0, 00:20:22.789 "delay_cmd_submit": true, 00:20:22.789 "transport_retry_count": 4, 00:20:22.789 "bdev_retry_count": 3, 00:20:22.789 "transport_ack_timeout": 0, 00:20:22.789 "ctrlr_loss_timeout_sec": 0, 00:20:22.789 "reconnect_delay_sec": 0, 00:20:22.789 "fast_io_fail_timeout_sec": 0, 00:20:22.789 "disable_auto_failback": false, 00:20:22.789 "generate_uuids": false, 00:20:22.789 "transport_tos": 0, 00:20:22.789 "nvme_error_stat": false, 00:20:22.789 "rdma_srq_size": 0, 00:20:22.789 "io_path_stat": false, 00:20:22.789 "allow_accel_sequence": false, 00:20:22.789 "rdma_max_cq_size": 0, 00:20:22.789 "rdma_cm_event_timeout_ms": 0, 00:20:22.789 "dhchap_digests": [ 00:20:22.789 "sha256", 00:20:22.789 "sha384", 00:20:22.789 "sha512" 00:20:22.789 ], 00:20:22.789 "dhchap_dhgroups": [ 00:20:22.789 "null", 00:20:22.789 "ffdhe2048", 00:20:22.789 "ffdhe3072", 00:20:22.789 "ffdhe4096", 00:20:22.789 "ffdhe6144", 00:20:22.789 "ffdhe8192" 00:20:22.789 ] 00:20:22.789 } 00:20:22.789 }, 00:20:22.789 { 00:20:22.789 "method": "bdev_nvme_set_hotplug", 00:20:22.789 "params": { 00:20:22.789 "period_us": 100000, 00:20:22.789 "enable": false 00:20:22.789 } 00:20:22.789 }, 00:20:22.789 { 00:20:22.789 "method": "bdev_malloc_create", 00:20:22.789 "params": { 00:20:22.789 "name": "malloc0", 00:20:22.789 "num_blocks": 8192, 00:20:22.789 "block_size": 4096, 00:20:22.789 
"physical_block_size": 4096, 00:20:22.789 "uuid": "ab445ab8-084e-4a1a-95ba-bcf03cc4162a", 00:20:22.789 "optimal_io_boundary": 0 00:20:22.789 } 00:20:22.790 }, 00:20:22.790 { 00:20:22.790 "method": "bdev_wait_for_examine" 00:20:22.790 } 00:20:22.790 ] 00:20:22.790 }, 00:20:22.790 { 00:20:22.790 "subsystem": "nbd", 00:20:22.790 "config": [] 00:20:22.790 }, 00:20:22.790 { 00:20:22.790 "subsystem": "scheduler", 00:20:22.790 "config": [ 00:20:22.790 { 00:20:22.790 "method": "framework_set_scheduler", 00:20:22.790 "params": { 00:20:22.790 "name": "static" 00:20:22.790 } 00:20:22.790 } 00:20:22.790 ] 00:20:22.790 }, 00:20:22.790 { 00:20:22.790 "subsystem": "nvmf", 00:20:22.790 "config": [ 00:20:22.790 { 00:20:22.790 "method": "nvmf_set_config", 00:20:22.790 "params": { 00:20:22.790 "discovery_filter": "match_any", 00:20:22.790 "admin_cmd_passthru": { 00:20:22.790 "identify_ctrlr": false 00:20:22.790 } 00:20:22.790 } 00:20:22.790 }, 00:20:22.790 { 00:20:22.790 "method": "nvmf_set_max_subsystems", 00:20:22.790 "params": { 00:20:22.790 "max_subsystems": 1024 00:20:22.790 } 00:20:22.790 }, 00:20:22.790 { 00:20:22.790 "method": "nvmf_set_crdt", 00:20:22.790 "params": { 00:20:22.790 "crdt1": 0, 00:20:22.790 "crdt2": 0, 00:20:22.790 "crdt3": 0 00:20:22.790 } 00:20:22.790 }, 00:20:22.790 { 00:20:22.790 "method": "nvmf_create_transport", 00:20:22.790 "params": { 00:20:22.790 "trtype": "TCP", 00:20:22.790 "max_queue_depth": 128, 00:20:22.790 "max_io_qpairs_per_ctrlr": 127, 00:20:22.790 "in_capsule_data_size": 4096, 00:20:22.790 "max_io_size": 131072, 00:20:22.790 "io_unit_size": 131072, 00:20:22.790 "max_aq_depth": 128, 00:20:22.790 "num_shared_buffers": 511, 00:20:22.790 "buf_cache_size": 4294967295, 00:20:22.790 "dif_insert_or_strip": false, 00:20:22.790 "zcopy": false, 00:20:22.790 "c2h_success": false, 00:20:22.790 "sock_priority": 0, 00:20:22.790 "abort_timeout_sec": 1, 00:20:22.790 "ack_timeout": 0, 00:20:22.790 "data_wr_pool_size": 0 00:20:22.790 } 00:20:22.790 }, 00:20:22.790 { 00:20:22.790 "method": "nvmf_create_subsystem", 00:20:22.790 "params": { 00:20:22.790 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.790 "allow_any_host": false, 00:20:22.790 "serial_number": "SPDK00000000000001", 00:20:22.790 "model_number": "SPDK bdev Controller", 00:20:22.790 "max_namespaces": 10, 00:20:22.790 "min_cntlid": 1, 00:20:22.790 "max_cntlid": 65519, 00:20:22.790 "ana_reporting": false 00:20:22.790 } 00:20:22.790 }, 00:20:22.790 { 00:20:22.790 "method": "nvmf_subsystem_add_host", 00:20:22.790 "params": { 00:20:22.790 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.790 "host": "nqn.2016-06.io.spdk:host1", 00:20:22.790 "psk": "/tmp/tmp.LPIwG7wSST" 00:20:22.790 } 00:20:22.790 }, 00:20:22.790 { 00:20:22.790 "method": "nvmf_subsystem_add_ns", 00:20:22.790 "params": { 00:20:22.790 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.790 "namespace": { 00:20:22.790 "nsid": 1, 00:20:22.790 "bdev_name": "malloc0", 00:20:22.790 "nguid": "AB445AB8084E4A1A95BABCF03CC4162A", 00:20:22.790 "uuid": "ab445ab8-084e-4a1a-95ba-bcf03cc4162a", 00:20:22.790 "no_auto_visible": false 00:20:22.790 } 00:20:22.790 } 00:20:22.790 }, 00:20:22.790 { 00:20:22.790 "method": "nvmf_subsystem_add_listener", 00:20:22.790 "params": { 00:20:22.790 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.790 "listen_address": { 00:20:22.790 "trtype": "TCP", 00:20:22.790 "adrfam": "IPv4", 00:20:22.790 "traddr": "10.0.0.2", 00:20:22.790 "trsvcid": "4420" 00:20:22.790 }, 00:20:22.790 "secure_channel": true 00:20:22.790 } 00:20:22.790 } 00:20:22.790 ] 00:20:22.790 } 
00:20:22.790 ] 00:20:22.790 }' 00:20:22.790 01:22:58 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:23.101 01:22:58 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:20:23.101 "subsystems": [ 00:20:23.101 { 00:20:23.101 "subsystem": "keyring", 00:20:23.101 "config": [] 00:20:23.101 }, 00:20:23.101 { 00:20:23.101 "subsystem": "iobuf", 00:20:23.101 "config": [ 00:20:23.101 { 00:20:23.101 "method": "iobuf_set_options", 00:20:23.101 "params": { 00:20:23.101 "small_pool_count": 8192, 00:20:23.101 "large_pool_count": 1024, 00:20:23.101 "small_bufsize": 8192, 00:20:23.101 "large_bufsize": 135168 00:20:23.101 } 00:20:23.101 } 00:20:23.101 ] 00:20:23.101 }, 00:20:23.101 { 00:20:23.101 "subsystem": "sock", 00:20:23.101 "config": [ 00:20:23.101 { 00:20:23.101 "method": "sock_impl_set_options", 00:20:23.101 "params": { 00:20:23.101 "impl_name": "posix", 00:20:23.101 "recv_buf_size": 2097152, 00:20:23.101 "send_buf_size": 2097152, 00:20:23.101 "enable_recv_pipe": true, 00:20:23.101 "enable_quickack": false, 00:20:23.101 "enable_placement_id": 0, 00:20:23.101 "enable_zerocopy_send_server": true, 00:20:23.101 "enable_zerocopy_send_client": false, 00:20:23.101 "zerocopy_threshold": 0, 00:20:23.101 "tls_version": 0, 00:20:23.101 "enable_ktls": false 00:20:23.101 } 00:20:23.101 }, 00:20:23.101 { 00:20:23.101 "method": "sock_impl_set_options", 00:20:23.101 "params": { 00:20:23.101 "impl_name": "ssl", 00:20:23.101 "recv_buf_size": 4096, 00:20:23.101 "send_buf_size": 4096, 00:20:23.101 "enable_recv_pipe": true, 00:20:23.101 "enable_quickack": false, 00:20:23.101 "enable_placement_id": 0, 00:20:23.101 "enable_zerocopy_send_server": true, 00:20:23.101 "enable_zerocopy_send_client": false, 00:20:23.101 "zerocopy_threshold": 0, 00:20:23.101 "tls_version": 0, 00:20:23.101 "enable_ktls": false 00:20:23.101 } 00:20:23.101 } 00:20:23.101 ] 00:20:23.101 }, 00:20:23.101 { 00:20:23.101 "subsystem": "vmd", 00:20:23.101 "config": [] 00:20:23.101 }, 00:20:23.101 { 00:20:23.101 "subsystem": "accel", 00:20:23.101 "config": [ 00:20:23.101 { 00:20:23.101 "method": "accel_set_options", 00:20:23.101 "params": { 00:20:23.101 "small_cache_size": 128, 00:20:23.101 "large_cache_size": 16, 00:20:23.101 "task_count": 2048, 00:20:23.101 "sequence_count": 2048, 00:20:23.101 "buf_count": 2048 00:20:23.101 } 00:20:23.101 } 00:20:23.101 ] 00:20:23.101 }, 00:20:23.101 { 00:20:23.101 "subsystem": "bdev", 00:20:23.101 "config": [ 00:20:23.101 { 00:20:23.101 "method": "bdev_set_options", 00:20:23.101 "params": { 00:20:23.101 "bdev_io_pool_size": 65535, 00:20:23.101 "bdev_io_cache_size": 256, 00:20:23.101 "bdev_auto_examine": true, 00:20:23.101 "iobuf_small_cache_size": 128, 00:20:23.101 "iobuf_large_cache_size": 16 00:20:23.101 } 00:20:23.101 }, 00:20:23.101 { 00:20:23.101 "method": "bdev_raid_set_options", 00:20:23.101 "params": { 00:20:23.101 "process_window_size_kb": 1024 00:20:23.101 } 00:20:23.101 }, 00:20:23.101 { 00:20:23.101 "method": "bdev_iscsi_set_options", 00:20:23.101 "params": { 00:20:23.101 "timeout_sec": 30 00:20:23.101 } 00:20:23.101 }, 00:20:23.101 { 00:20:23.101 "method": "bdev_nvme_set_options", 00:20:23.101 "params": { 00:20:23.101 "action_on_timeout": "none", 00:20:23.101 "timeout_us": 0, 00:20:23.101 "timeout_admin_us": 0, 00:20:23.101 "keep_alive_timeout_ms": 10000, 00:20:23.101 "arbitration_burst": 0, 00:20:23.101 "low_priority_weight": 0, 00:20:23.101 "medium_priority_weight": 0, 00:20:23.101 
"high_priority_weight": 0, 00:20:23.101 "nvme_adminq_poll_period_us": 10000, 00:20:23.101 "nvme_ioq_poll_period_us": 0, 00:20:23.101 "io_queue_requests": 512, 00:20:23.101 "delay_cmd_submit": true, 00:20:23.101 "transport_retry_count": 4, 00:20:23.101 "bdev_retry_count": 3, 00:20:23.101 "transport_ack_timeout": 0, 00:20:23.101 "ctrlr_loss_timeout_sec": 0, 00:20:23.101 "reconnect_delay_sec": 0, 00:20:23.101 "fast_io_fail_timeout_sec": 0, 00:20:23.101 "disable_auto_failback": false, 00:20:23.101 "generate_uuids": false, 00:20:23.101 "transport_tos": 0, 00:20:23.101 "nvme_error_stat": false, 00:20:23.101 "rdma_srq_size": 0, 00:20:23.101 "io_path_stat": false, 00:20:23.101 "allow_accel_sequence": false, 00:20:23.101 "rdma_max_cq_size": 0, 00:20:23.101 "rdma_cm_event_timeout_ms": 0, 00:20:23.101 "dhchap_digests": [ 00:20:23.101 "sha256", 00:20:23.101 "sha384", 00:20:23.101 "sha512" 00:20:23.101 ], 00:20:23.101 "dhchap_dhgroups": [ 00:20:23.101 "null", 00:20:23.101 "ffdhe2048", 00:20:23.101 "ffdhe3072", 00:20:23.101 "ffdhe4096", 00:20:23.101 "ffdhe6144", 00:20:23.101 "ffdhe8192" 00:20:23.101 ] 00:20:23.101 } 00:20:23.101 }, 00:20:23.101 { 00:20:23.101 "method": "bdev_nvme_attach_controller", 00:20:23.101 "params": { 00:20:23.101 "name": "TLSTEST", 00:20:23.101 "trtype": "TCP", 00:20:23.101 "adrfam": "IPv4", 00:20:23.101 "traddr": "10.0.0.2", 00:20:23.101 "trsvcid": "4420", 00:20:23.101 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.101 "prchk_reftag": false, 00:20:23.101 "prchk_guard": false, 00:20:23.101 "ctrlr_loss_timeout_sec": 0, 00:20:23.101 "reconnect_delay_sec": 0, 00:20:23.101 "fast_io_fail_timeout_sec": 0, 00:20:23.101 "psk": "/tmp/tmp.LPIwG7wSST", 00:20:23.101 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:23.101 "hdgst": false, 00:20:23.101 "ddgst": false 00:20:23.101 } 00:20:23.101 }, 00:20:23.101 { 00:20:23.101 "method": "bdev_nvme_set_hotplug", 00:20:23.101 "params": { 00:20:23.101 "period_us": 100000, 00:20:23.101 "enable": false 00:20:23.101 } 00:20:23.101 }, 00:20:23.101 { 00:20:23.101 "method": "bdev_wait_for_examine" 00:20:23.101 } 00:20:23.101 ] 00:20:23.101 }, 00:20:23.101 { 00:20:23.101 "subsystem": "nbd", 00:20:23.101 "config": [] 00:20:23.101 } 00:20:23.101 ] 00:20:23.101 }' 00:20:23.101 01:22:58 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 4140128 00:20:23.101 01:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4140128 ']' 00:20:23.102 01:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4140128 00:20:23.102 01:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:23.102 01:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:23.102 01:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4140128 00:20:23.102 01:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:23.102 01:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:23.102 01:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4140128' 00:20:23.102 killing process with pid 4140128 00:20:23.102 01:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4140128 00:20:23.102 Received shutdown signal, test time was about 10.000000 seconds 00:20:23.102 00:20:23.102 Latency(us) 00:20:23.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.102 
=================================================================================================================== 00:20:23.102 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:23.102 [2024-05-15 01:22:58.732589] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:23.102 01:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4140128 00:20:23.383 01:22:58 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 4139842 00:20:23.383 01:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4139842 ']' 00:20:23.383 01:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4139842 00:20:23.383 01:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:23.383 01:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:23.383 01:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4139842 00:20:23.383 01:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:23.383 01:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:23.383 01:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4139842' 00:20:23.383 killing process with pid 4139842 00:20:23.383 01:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4139842 00:20:23.383 [2024-05-15 01:22:58.988982] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:23.383 [2024-05-15 01:22:58.989019] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:23.383 01:22:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4139842 00:20:23.642 01:22:59 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:23.642 01:22:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:23.642 01:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:23.642 01:22:59 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:20:23.642 "subsystems": [ 00:20:23.642 { 00:20:23.642 "subsystem": "keyring", 00:20:23.642 "config": [] 00:20:23.642 }, 00:20:23.642 { 00:20:23.642 "subsystem": "iobuf", 00:20:23.642 "config": [ 00:20:23.642 { 00:20:23.642 "method": "iobuf_set_options", 00:20:23.642 "params": { 00:20:23.642 "small_pool_count": 8192, 00:20:23.642 "large_pool_count": 1024, 00:20:23.642 "small_bufsize": 8192, 00:20:23.642 "large_bufsize": 135168 00:20:23.642 } 00:20:23.642 } 00:20:23.642 ] 00:20:23.642 }, 00:20:23.642 { 00:20:23.642 "subsystem": "sock", 00:20:23.642 "config": [ 00:20:23.642 { 00:20:23.642 "method": "sock_impl_set_options", 00:20:23.642 "params": { 00:20:23.642 "impl_name": "posix", 00:20:23.642 "recv_buf_size": 2097152, 00:20:23.642 "send_buf_size": 2097152, 00:20:23.642 "enable_recv_pipe": true, 00:20:23.642 "enable_quickack": false, 00:20:23.642 "enable_placement_id": 0, 00:20:23.642 "enable_zerocopy_send_server": true, 00:20:23.642 "enable_zerocopy_send_client": false, 00:20:23.642 "zerocopy_threshold": 0, 00:20:23.642 "tls_version": 0, 00:20:23.642 "enable_ktls": false 00:20:23.642 } 00:20:23.642 }, 00:20:23.642 { 00:20:23.642 "method": "sock_impl_set_options", 00:20:23.642 
"params": { 00:20:23.642 "impl_name": "ssl", 00:20:23.642 "recv_buf_size": 4096, 00:20:23.642 "send_buf_size": 4096, 00:20:23.642 "enable_recv_pipe": true, 00:20:23.642 "enable_quickack": false, 00:20:23.642 "enable_placement_id": 0, 00:20:23.642 "enable_zerocopy_send_server": true, 00:20:23.642 "enable_zerocopy_send_client": false, 00:20:23.642 "zerocopy_threshold": 0, 00:20:23.642 "tls_version": 0, 00:20:23.642 "enable_ktls": false 00:20:23.642 } 00:20:23.642 } 00:20:23.642 ] 00:20:23.642 }, 00:20:23.642 { 00:20:23.642 "subsystem": "vmd", 00:20:23.642 "config": [] 00:20:23.642 }, 00:20:23.642 { 00:20:23.642 "subsystem": "accel", 00:20:23.642 "config": [ 00:20:23.642 { 00:20:23.642 "method": "accel_set_options", 00:20:23.642 "params": { 00:20:23.642 "small_cache_size": 128, 00:20:23.642 "large_cache_size": 16, 00:20:23.642 "task_count": 2048, 00:20:23.642 "sequence_count": 2048, 00:20:23.642 "buf_count": 2048 00:20:23.642 } 00:20:23.642 } 00:20:23.642 ] 00:20:23.642 }, 00:20:23.642 { 00:20:23.642 "subsystem": "bdev", 00:20:23.642 "config": [ 00:20:23.642 { 00:20:23.642 "method": "bdev_set_options", 00:20:23.642 "params": { 00:20:23.642 "bdev_io_pool_size": 65535, 00:20:23.642 "bdev_io_cache_size": 256, 00:20:23.642 "bdev_auto_examine": true, 00:20:23.642 "iobuf_small_cache_size": 128, 00:20:23.642 "iobuf_large_cache_size": 16 00:20:23.642 } 00:20:23.642 }, 00:20:23.642 { 00:20:23.642 "method": "bdev_raid_set_options", 00:20:23.642 "params": { 00:20:23.642 "process_window_size_kb": 1024 00:20:23.642 } 00:20:23.642 }, 00:20:23.642 { 00:20:23.642 "method": "bdev_iscsi_set_options", 00:20:23.642 "params": { 00:20:23.642 "timeout_sec": 30 00:20:23.642 } 00:20:23.642 }, 00:20:23.642 { 00:20:23.642 "method": "bdev_nvme_set_options", 00:20:23.642 "params": { 00:20:23.642 "action_on_timeout": "none", 00:20:23.642 "timeout_us": 0, 00:20:23.642 "timeout_admin_us": 0, 00:20:23.642 "keep_alive_timeout_ms": 10000, 00:20:23.642 "arbitration_burst": 0, 00:20:23.642 "low_priority_weight": 0, 00:20:23.642 "medium_priority_weight": 0, 00:20:23.642 "high_priority_weight": 0, 00:20:23.642 "nvme_adminq_poll_period_us": 10000, 00:20:23.642 "nvme_ioq_poll_period_us": 0, 00:20:23.642 "io_queue_requests": 0, 00:20:23.642 "delay_cmd_submit": true, 00:20:23.642 "transport_retry_count": 4, 00:20:23.642 "bdev_retry_count": 3, 00:20:23.642 "transport_ack_timeout": 0, 00:20:23.642 "ctrlr_loss_timeout_sec": 0, 00:20:23.642 "reconnect_delay_sec": 0, 00:20:23.642 "fast_io_fail_timeout_sec": 0, 00:20:23.642 "disable_auto_failback": false, 00:20:23.642 "generate_uuids": false, 00:20:23.642 "transport_tos": 0, 00:20:23.642 "nvme_error_stat": false, 00:20:23.642 "rdma_srq_size": 0, 00:20:23.642 "io_path_stat": false, 00:20:23.642 "allow_accel_sequence": false, 00:20:23.642 "rdma_max_cq_size": 0, 00:20:23.642 "rdma_cm_event_timeout_ms": 0, 00:20:23.642 "dhchap_digests": [ 00:20:23.642 "sha256", 00:20:23.642 "sha384", 00:20:23.642 "sha512" 00:20:23.642 ], 00:20:23.642 "dhchap_dhgroups": [ 00:20:23.642 "null", 00:20:23.642 "ffdhe2048", 00:20:23.642 "ffdhe3072", 00:20:23.642 "ffdhe4096", 00:20:23.642 "ffdhe6144", 00:20:23.642 "ffdhe8192" 00:20:23.642 ] 00:20:23.642 } 00:20:23.642 }, 00:20:23.642 { 00:20:23.642 "method": "bdev_nvme_set_hotplug", 00:20:23.642 "params": { 00:20:23.642 "period_us": 100000, 00:20:23.642 "enable": false 00:20:23.642 } 00:20:23.642 }, 00:20:23.642 { 00:20:23.642 "method": "bdev_malloc_create", 00:20:23.642 "params": { 00:20:23.642 "name": "malloc0", 00:20:23.642 "num_blocks": 8192, 00:20:23.642 
"block_size": 4096, 00:20:23.642 "physical_block_size": 4096, 00:20:23.642 "uuid": "ab445ab8-084e-4a1a-95ba-bcf03cc4162a", 00:20:23.642 "optimal_io_boundary": 0 00:20:23.642 } 00:20:23.642 }, 00:20:23.642 { 00:20:23.642 "method": "bdev_wait_for_examine" 00:20:23.642 } 00:20:23.642 ] 00:20:23.642 }, 00:20:23.642 { 00:20:23.642 "subsystem": "nbd", 00:20:23.642 "config": [] 00:20:23.642 }, 00:20:23.642 { 00:20:23.642 "subsystem": "scheduler", 00:20:23.642 "config": [ 00:20:23.642 { 00:20:23.642 "method": "framework_set_scheduler", 00:20:23.643 "params": { 00:20:23.643 "name": "static" 00:20:23.643 } 00:20:23.643 } 00:20:23.643 ] 00:20:23.643 }, 00:20:23.643 { 00:20:23.643 "subsystem": "nvmf", 00:20:23.643 "config": [ 00:20:23.643 { 00:20:23.643 "method": "nvmf_set_config", 00:20:23.643 "params": { 00:20:23.643 "discovery_filter": "match_any", 00:20:23.643 "admin_cmd_passthru": { 00:20:23.643 "identify_ctrlr": false 00:20:23.643 } 00:20:23.643 } 00:20:23.643 }, 00:20:23.643 { 00:20:23.643 "method": "nvmf_set_max_subsystems", 00:20:23.643 "params": { 00:20:23.643 "max_subsystems": 1024 00:20:23.643 } 00:20:23.643 }, 00:20:23.643 { 00:20:23.643 "method": "nvmf_set_crdt", 00:20:23.643 "params": { 00:20:23.643 "crdt1": 0, 00:20:23.643 "crdt2": 0, 00:20:23.643 "crdt3": 0 00:20:23.643 } 00:20:23.643 }, 00:20:23.643 { 00:20:23.643 "method": "nvmf_create_transport", 00:20:23.643 "params": { 00:20:23.643 "trtype": "TCP", 00:20:23.643 "max_queue_depth": 128, 00:20:23.643 "max_io_qpairs_per_ctrlr": 127, 00:20:23.643 "in_capsule_data_size": 4096, 00:20:23.643 "max_io_size": 131072, 00:20:23.643 "io_unit_size": 131072, 00:20:23.643 "max_aq_depth": 128, 00:20:23.643 "num_shared_buffers": 511, 00:20:23.643 "buf_cache_size": 4294967295, 00:20:23.643 "dif_insert_or_strip": false, 00:20:23.643 "zcopy": false, 00:20:23.643 "c2h_success": false, 00:20:23.643 "sock_priority": 0, 00:20:23.643 "abort_timeout_sec": 1, 00:20:23.643 "ack_timeout": 0, 00:20:23.643 "data_wr_pool_size": 0 00:20:23.643 } 00:20:23.643 }, 00:20:23.643 { 00:20:23.643 "method": "nvmf_create_subsystem", 00:20:23.643 "params": { 00:20:23.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.643 "allow_any_host": false, 00:20:23.643 "serial_number": "SPDK00000000000001", 00:20:23.643 "model_number": "SPDK bdev Controller", 00:20:23.643 "max_namespaces": 10, 00:20:23.643 "min_cntlid": 1, 00:20:23.643 "max_cntlid": 65519, 00:20:23.643 "ana_reporting": false 00:20:23.643 } 00:20:23.643 }, 00:20:23.643 { 00:20:23.643 "method": "nvmf_subsystem_add_host", 00:20:23.643 "params": { 00:20:23.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.643 "host": "nqn.2016-06.io.spdk:host1", 00:20:23.643 "psk": "/tmp/tmp.LPIwG7wSST" 00:20:23.643 } 00:20:23.643 }, 00:20:23.643 { 00:20:23.643 "method": "nvmf_subsystem_add_ns", 00:20:23.643 "params": { 00:20:23.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.643 "namespace": { 00:20:23.643 "nsid": 1, 00:20:23.643 "bdev_name": "malloc0", 00:20:23.643 "nguid": "AB445AB8084E4A1A95BABCF03CC4162A", 00:20:23.643 "uuid": "ab445ab8-084e-4a1a-95ba-bcf03cc4162a", 00:20:23.643 "no_auto_visible": false 00:20:23.643 } 00:20:23.643 } 00:20:23.643 }, 00:20:23.643 { 00:20:23.643 "method": "nvmf_subsystem_add_listener", 00:20:23.643 "params": { 00:20:23.643 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.643 "listen_address": { 00:20:23.643 "trtype": "TCP", 00:20:23.643 "adrfam": "IPv4", 00:20:23.643 "traddr": "10.0.0.2", 00:20:23.643 "trsvcid": "4420" 00:20:23.643 }, 00:20:23.643 "secure_channel": true 00:20:23.643 } 00:20:23.643 } 
00:20:23.643 ] 00:20:23.643 } 00:20:23.643 ] 00:20:23.643 }' 00:20:23.643 01:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.643 01:22:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4140615 00:20:23.643 01:22:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4140615 00:20:23.643 01:22:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:23.643 01:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4140615 ']' 00:20:23.643 01:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.643 01:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:23.643 01:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.643 01:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:23.643 01:22:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.643 [2024-05-15 01:22:59.259205] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:20:23.643 [2024-05-15 01:22:59.259255] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.643 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.643 [2024-05-15 01:22:59.333104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.901 [2024-05-15 01:22:59.404859] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.901 [2024-05-15 01:22:59.404894] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.901 [2024-05-15 01:22:59.404903] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:23.901 [2024-05-15 01:22:59.404912] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:23.901 [2024-05-15 01:22:59.404919] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
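
The JSON echoed just above is the target configuration captured earlier with save_config (tls.sh keeps it in $tgtconf); here the target is relaunched with -c /dev/fd/62 so that the TCP transport, subsystem, secure-channel listener and PSK host entry are all restored in one step instead of being re-issued as individual RPCs. A minimal sketch of that capture-and-replay pattern follows; $SPDK_DIR and the temporary file name are assumptions, and the ip netns wrapper the harness uses (cvl_0_0_ns_spdk) is omitted for brevity.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path as it appears in this log

# Capture the live target configuration (transport, subsystem, listener, PSK host) as JSON.
"$SPDK_DIR/scripts/rpc.py" save_config > /tmp/tgt_config.json   # illustrative file name

# Relaunch the target and replay that configuration through a file descriptor,
# which is what the harness does with 'nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62'.
"$SPDK_DIR/build/bin/nvmf_tgt" -m 0x2 -c <(cat /tmp/tgt_config.json)
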
00:20:23.901 [2024-05-15 01:22:59.404981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.159 [2024-05-15 01:22:59.599646] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.159 [2024-05-15 01:22:59.615615] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:24.159 [2024-05-15 01:22:59.631644] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:24.159 [2024-05-15 01:22:59.631688] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:24.159 [2024-05-15 01:22:59.640609] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.417 01:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:24.417 01:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:24.417 01:23:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:24.417 01:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:24.417 01:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.675 01:23:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.675 01:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=4140700 00:20:24.675 01:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 4140700 /var/tmp/bdevperf.sock 00:20:24.675 01:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4140700 ']' 00:20:24.675 01:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:24.675 01:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:24.675 01:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:24.675 01:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:24.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
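
bdevperf is launched here with -z, so it comes up idle on its own RPC socket (/var/tmp/bdevperf.sock), takes its bdev and NVMe/TCP-over-TLS configuration from -c /dev/fd/63 (the JSON echoed below), and only starts issuing I/O when the harness later calls perform_tests through bdevperf.py. A rough sketch of that launch-and-trigger pattern follows, with the binary paths and workload flags copied from this log; the plain config file and the socket-polling loop are simplifications of what the harness does with /dev/fd/63 and waitforlisten.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path as it appears in this log

# Start bdevperf idle (-z) on a private RPC socket; the workload definition
# (queue depth 128, 4096-byte I/O, verify, 10 s) is parsed now but no I/O runs yet.
"$SPDK_DIR/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c /tmp/bdevperf_config.json &   # illustrative config file name

# Wait until the RPC socket is up (the harness uses waitforlisten for this).
while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done

# Trigger the actual run over the same socket.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -t 20 -s /var/tmp/bdevperf.sock perform_tests
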
00:20:24.675 01:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:24.675 01:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:20:24.675 "subsystems": [ 00:20:24.675 { 00:20:24.675 "subsystem": "keyring", 00:20:24.675 "config": [] 00:20:24.675 }, 00:20:24.675 { 00:20:24.675 "subsystem": "iobuf", 00:20:24.675 "config": [ 00:20:24.675 { 00:20:24.675 "method": "iobuf_set_options", 00:20:24.675 "params": { 00:20:24.675 "small_pool_count": 8192, 00:20:24.675 "large_pool_count": 1024, 00:20:24.675 "small_bufsize": 8192, 00:20:24.675 "large_bufsize": 135168 00:20:24.675 } 00:20:24.675 } 00:20:24.675 ] 00:20:24.675 }, 00:20:24.675 { 00:20:24.675 "subsystem": "sock", 00:20:24.675 "config": [ 00:20:24.675 { 00:20:24.675 "method": "sock_impl_set_options", 00:20:24.675 "params": { 00:20:24.675 "impl_name": "posix", 00:20:24.675 "recv_buf_size": 2097152, 00:20:24.675 "send_buf_size": 2097152, 00:20:24.675 "enable_recv_pipe": true, 00:20:24.675 "enable_quickack": false, 00:20:24.675 "enable_placement_id": 0, 00:20:24.675 "enable_zerocopy_send_server": true, 00:20:24.675 "enable_zerocopy_send_client": false, 00:20:24.675 "zerocopy_threshold": 0, 00:20:24.675 "tls_version": 0, 00:20:24.676 "enable_ktls": false 00:20:24.676 } 00:20:24.676 }, 00:20:24.676 { 00:20:24.676 "method": "sock_impl_set_options", 00:20:24.676 "params": { 00:20:24.676 "impl_name": "ssl", 00:20:24.676 "recv_buf_size": 4096, 00:20:24.676 "send_buf_size": 4096, 00:20:24.676 "enable_recv_pipe": true, 00:20:24.676 "enable_quickack": false, 00:20:24.676 "enable_placement_id": 0, 00:20:24.676 "enable_zerocopy_send_server": true, 00:20:24.676 "enable_zerocopy_send_client": false, 00:20:24.676 "zerocopy_threshold": 0, 00:20:24.676 "tls_version": 0, 00:20:24.676 "enable_ktls": false 00:20:24.676 } 00:20:24.676 } 00:20:24.676 ] 00:20:24.676 }, 00:20:24.676 { 00:20:24.676 "subsystem": "vmd", 00:20:24.676 "config": [] 00:20:24.676 }, 00:20:24.676 { 00:20:24.676 "subsystem": "accel", 00:20:24.676 "config": [ 00:20:24.676 { 00:20:24.676 "method": "accel_set_options", 00:20:24.676 "params": { 00:20:24.676 "small_cache_size": 128, 00:20:24.676 "large_cache_size": 16, 00:20:24.676 "task_count": 2048, 00:20:24.676 "sequence_count": 2048, 00:20:24.676 "buf_count": 2048 00:20:24.676 } 00:20:24.676 } 00:20:24.676 ] 00:20:24.676 }, 00:20:24.676 { 00:20:24.676 "subsystem": "bdev", 00:20:24.676 "config": [ 00:20:24.676 { 00:20:24.676 "method": "bdev_set_options", 00:20:24.676 "params": { 00:20:24.676 "bdev_io_pool_size": 65535, 00:20:24.676 "bdev_io_cache_size": 256, 00:20:24.676 "bdev_auto_examine": true, 00:20:24.676 "iobuf_small_cache_size": 128, 00:20:24.676 "iobuf_large_cache_size": 16 00:20:24.676 } 00:20:24.676 }, 00:20:24.676 { 00:20:24.676 "method": "bdev_raid_set_options", 00:20:24.676 "params": { 00:20:24.676 "process_window_size_kb": 1024 00:20:24.676 } 00:20:24.676 }, 00:20:24.676 { 00:20:24.676 "method": "bdev_iscsi_set_options", 00:20:24.676 "params": { 00:20:24.676 "timeout_sec": 30 00:20:24.676 } 00:20:24.676 }, 00:20:24.676 { 00:20:24.676 "method": "bdev_nvme_set_options", 00:20:24.676 "params": { 00:20:24.676 "action_on_timeout": "none", 00:20:24.676 "timeout_us": 0, 00:20:24.676 "timeout_admin_us": 0, 00:20:24.676 "keep_alive_timeout_ms": 10000, 00:20:24.676 "arbitration_burst": 0, 00:20:24.676 "low_priority_weight": 0, 00:20:24.676 "medium_priority_weight": 0, 00:20:24.676 "high_priority_weight": 0, 00:20:24.676 "nvme_adminq_poll_period_us": 10000, 00:20:24.676 "nvme_ioq_poll_period_us": 0, 
00:20:24.676 "io_queue_requests": 512, 00:20:24.676 "delay_cmd_submit": true, 00:20:24.676 "transport_retry_count": 4, 00:20:24.676 "bdev_retry_count": 3, 00:20:24.676 "transport_ack_timeout": 0, 00:20:24.676 "ctrlr_loss_timeout_sec": 0, 00:20:24.676 "reconnect_delay_sec": 0, 00:20:24.676 "fast_io_fail_timeout_sec": 0, 00:20:24.676 "disable_auto_failback": false, 00:20:24.676 "generate_uuids": false, 00:20:24.676 "transport_tos": 0, 00:20:24.676 "nvme_error_stat": false, 00:20:24.676 "rdma_srq_size": 0, 00:20:24.676 "io_path_stat": false, 00:20:24.676 "allow_accel_sequence": false, 00:20:24.676 "rdma_max_cq_size": 0, 00:20:24.676 "rdma_cm_event_timeout_ms": 0, 00:20:24.676 "dhchap_digests": [ 00:20:24.676 "sha256", 00:20:24.676 "sha384", 00:20:24.676 "sha512" 00:20:24.676 ], 00:20:24.676 "dhchap_dhgroups": [ 00:20:24.676 "null", 00:20:24.676 "ffdhe2048", 00:20:24.676 "ffdhe3072", 00:20:24.676 "ffdhe4096", 00:20:24.676 "ffdhe6144", 00:20:24.676 "ffdhe8192" 00:20:24.676 ] 00:20:24.676 } 00:20:24.676 }, 00:20:24.676 { 00:20:24.676 "method": "bdev_nvme_attach_controller", 00:20:24.676 "params": { 00:20:24.676 "name": "TLSTEST", 00:20:24.676 "trtype": "TCP", 00:20:24.676 "adrfam": "IPv4", 00:20:24.676 "traddr": "10.0.0.2", 00:20:24.676 "trsvcid": "4420", 00:20:24.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.676 "prchk_reftag": false, 00:20:24.676 "prchk_guard": false, 00:20:24.676 "ctrlr_loss_timeout_sec": 0, 00:20:24.676 "reconnect_delay_sec": 0, 00:20:24.676 "fast_io_fail_timeout_sec": 0, 00:20:24.676 "psk": "/tmp/tmp.LPIwG7wSST", 00:20:24.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:24.676 "hdgst": false, 00:20:24.676 "ddgst": false 00:20:24.676 } 00:20:24.676 }, 00:20:24.676 { 00:20:24.676 "method": "bdev_nvme_set_hotplug", 00:20:24.676 "params": { 00:20:24.676 "period_us": 100000, 00:20:24.676 "enable": false 00:20:24.676 } 00:20:24.676 }, 00:20:24.676 { 00:20:24.676 "method": "bdev_wait_for_examine" 00:20:24.676 } 00:20:24.676 ] 00:20:24.676 }, 00:20:24.676 { 00:20:24.676 "subsystem": "nbd", 00:20:24.676 "config": [] 00:20:24.676 } 00:20:24.676 ] 00:20:24.676 }' 00:20:24.676 01:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.676 [2024-05-15 01:23:00.156277] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:20:24.676 [2024-05-15 01:23:00.156330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4140700 ] 00:20:24.676 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.676 [2024-05-15 01:23:00.223362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.676 [2024-05-15 01:23:00.292715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.934 [2024-05-15 01:23:00.427198] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:24.935 [2024-05-15 01:23:00.427287] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:25.500 01:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:25.500 01:23:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:25.500 01:23:00 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:25.500 Running I/O for 10 seconds... 00:20:35.470 00:20:35.470 Latency(us) 00:20:35.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.470 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:35.470 Verification LBA range: start 0x0 length 0x2000 00:20:35.470 TLSTESTn1 : 10.06 2025.72 7.91 0.00 0.00 63032.41 6920.60 99824.44 00:20:35.470 =================================================================================================================== 00:20:35.470 Total : 2025.72 7.91 0.00 0.00 63032.41 6920.60 99824.44 00:20:35.470 0 00:20:35.470 01:23:11 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:35.470 01:23:11 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 4140700 00:20:35.470 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4140700 ']' 00:20:35.470 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4140700 00:20:35.470 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:35.470 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:35.470 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4140700 00:20:35.727 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:20:35.727 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:20:35.727 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4140700' 00:20:35.727 killing process with pid 4140700 00:20:35.727 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4140700 00:20:35.727 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.727 00:20:35.727 Latency(us) 00:20:35.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.727 =================================================================================================================== 00:20:35.727 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:35.727 [2024-05-15 01:23:11.194815] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:20:35.727 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4140700 00:20:35.728 01:23:11 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 4140615 00:20:35.728 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4140615 ']' 00:20:35.728 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4140615 00:20:35.728 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:35.728 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:35.728 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4140615 00:20:35.986 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:35.986 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:35.986 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4140615' 00:20:35.986 killing process with pid 4140615 00:20:35.986 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4140615 00:20:35.986 [2024-05-15 01:23:11.454606] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:35.986 [2024-05-15 01:23:11.454643] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:35.986 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4140615 00:20:35.986 01:23:11 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:35.986 01:23:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:35.986 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:35.986 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.986 01:23:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4142661 00:20:35.986 01:23:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:35.986 01:23:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4142661 00:20:35.986 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4142661 ']' 00:20:35.986 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.986 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:35.986 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.986 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:35.986 01:23:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.245 [2024-05-15 01:23:11.725953] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:20:36.245 [2024-05-15 01:23:11.726003] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.245 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.245 [2024-05-15 01:23:11.799148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.245 [2024-05-15 01:23:11.864767] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.245 [2024-05-15 01:23:11.864808] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.245 [2024-05-15 01:23:11.864817] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.245 [2024-05-15 01:23:11.864825] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.245 [2024-05-15 01:23:11.864831] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.245 [2024-05-15 01:23:11.864861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.178 01:23:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:37.178 01:23:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:37.178 01:23:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:37.178 01:23:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:37.178 01:23:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.178 01:23:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.178 01:23:12 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.LPIwG7wSST 00:20:37.178 01:23:12 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.LPIwG7wSST 00:20:37.178 01:23:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:37.178 [2024-05-15 01:23:12.707822] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.178 01:23:12 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:37.436 01:23:12 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:37.436 [2024-05-15 01:23:13.028616] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:37.437 [2024-05-15 01:23:13.028687] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:37.437 [2024-05-15 01:23:13.028888] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.437 01:23:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:37.695 malloc0 00:20:37.695 01:23:13 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
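
This is the third time setup_nvmf_tgt runs in this log, so the target-side sequence is worth spelling out once: create the TCP transport, create the subsystem, add a listener with -k (TLS, flagged experimental in the notices above), back it with a malloc bdev, attach the namespace, and finally authorize the host with its PSK (that add_host call follows immediately below). A condensed sketch with the commands and arguments taken from the log; only $SPDK_DIR and the default RPC socket are assumptions.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path as it appears in this log
KEY=/tmp/tmp.LPIwG7wSST                                      # PSK file used throughout this log
RPC="$SPDK_DIR/scripts/rpc.py"                               # target RPC socket /var/tmp/spdk.sock by default

"$RPC" nvmf_create_transport -t tcp -o                 # -o matches '"c2h_success": false' in the dumped config
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
       -t tcp -a 10.0.0.2 -s 4420 -k                   # -k requests the secure (TLS) channel
"$RPC" bdev_malloc_create 32 4096 -b malloc0           # 32 MB of 4096-byte blocks = the 8192-block malloc0 above
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
"$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
       nqn.2016-06.io.spdk:host1 --psk "$KEY"          # key must already be mode 0600 (see the failed run earlier)
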
00:20:37.952 01:23:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LPIwG7wSST 00:20:37.952 [2024-05-15 01:23:13.542381] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:37.952 01:23:13 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=4143082 00:20:37.952 01:23:13 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:37.952 01:23:13 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.952 01:23:13 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 4143082 /var/tmp/bdevperf.sock 00:20:37.952 01:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4143082 ']' 00:20:37.952 01:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.952 01:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:37.952 01:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.952 01:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:37.952 01:23:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.952 [2024-05-15 01:23:13.598128] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:20:37.952 [2024-05-15 01:23:13.598179] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4143082 ] 00:20:37.952 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.209 [2024-05-15 01:23:13.668399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.209 [2024-05-15 01:23:13.743017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.773 01:23:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:38.773 01:23:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:38.773 01:23:14 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LPIwG7wSST 00:20:39.030 01:23:14 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:39.288 [2024-05-15 01:23:14.730543] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:39.288 nvme0n1 00:20:39.288 01:23:14 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:39.288 Running I/O for 1 seconds... 
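The entries just above complete the TLS setup by associating the host with a pre-shared key (via the deprecated "PSK path" form flagged in the warning) and then drive I/O from bdevperf acting as the TLS initiator. A condensed sketch of that initiator-side flow, with paths shortened and the bdevperf backgrounding/waitforlisten handling elided for brevity:

  # bind host1 to the subsystem with its PSK file
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.LPIwG7wSST
  # start bdevperf on its own RPC socket, load the key into its keyring, attach over TLS, run the test
  bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LPIwG7wSST
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests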
00:20:40.660 00:20:40.660 Latency(us) 00:20:40.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.660 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:40.660 Verification LBA range: start 0x0 length 0x2000 00:20:40.660 nvme0n1 : 1.06 1747.54 6.83 0.00 0.00 71655.68 6501.17 108213.04 00:20:40.660 =================================================================================================================== 00:20:40.660 Total : 1747.54 6.83 0.00 0.00 71655.68 6501.17 108213.04 00:20:40.660 0 00:20:40.660 01:23:15 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 4143082 00:20:40.660 01:23:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4143082 ']' 00:20:40.660 01:23:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4143082 00:20:40.660 01:23:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:40.660 01:23:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:40.660 01:23:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4143082 00:20:40.660 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:40.660 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:40.660 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4143082' 00:20:40.660 killing process with pid 4143082 00:20:40.660 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4143082 00:20:40.660 Received shutdown signal, test time was about 1.000000 seconds 00:20:40.660 00:20:40.660 Latency(us) 00:20:40.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.660 =================================================================================================================== 00:20:40.660 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:40.660 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4143082 00:20:40.660 01:23:16 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 4142661 00:20:40.660 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4142661 ']' 00:20:40.660 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4142661 00:20:40.660 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:40.660 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:40.660 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4142661 00:20:40.660 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:40.660 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:40.660 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4142661' 00:20:40.660 killing process with pid 4142661 00:20:40.660 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4142661 00:20:40.661 [2024-05-15 01:23:16.287604] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:40.661 [2024-05-15 01:23:16.287644] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:40.661 01:23:16 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 4142661 00:20:40.919 01:23:16 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:20:40.919 01:23:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:40.919 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:40.919 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.919 01:23:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4143529 00:20:40.919 01:23:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4143529 00:20:40.919 01:23:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:40.919 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4143529 ']' 00:20:40.919 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.919 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:40.919 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.919 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:40.919 01:23:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.919 [2024-05-15 01:23:16.554496] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:20:40.919 [2024-05-15 01:23:16.554547] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.919 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.176 [2024-05-15 01:23:16.628051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.176 [2024-05-15 01:23:16.699728] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.176 [2024-05-15 01:23:16.699767] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.176 [2024-05-15 01:23:16.699776] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:41.176 [2024-05-15 01:23:16.699785] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:41.176 [2024-05-15 01:23:16.699793] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:41.176 [2024-05-15 01:23:16.699813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.741 01:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:41.741 01:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:41.741 01:23:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:41.741 01:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:41.741 01:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.741 01:23:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.741 01:23:17 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:20:41.741 01:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.741 01:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.741 [2024-05-15 01:23:17.405189] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.741 malloc0 00:20:41.999 [2024-05-15 01:23:17.433607] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:41.999 [2024-05-15 01:23:17.433666] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:41.999 [2024-05-15 01:23:17.433864] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.999 01:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.999 01:23:17 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=4143693 00:20:41.999 01:23:17 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:41.999 01:23:17 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 4143693 /var/tmp/bdevperf.sock 00:20:41.999 01:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4143693 ']' 00:20:41.999 01:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.999 01:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:41.999 01:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:41.999 01:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:41.999 01:23:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.999 [2024-05-15 01:23:17.503282] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:20:41.999 [2024-05-15 01:23:17.503327] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4143693 ] 00:20:41.999 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.999 [2024-05-15 01:23:17.571502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.999 [2024-05-15 01:23:17.640941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.931 01:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:42.931 01:23:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:42.931 01:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.LPIwG7wSST 00:20:42.931 01:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:43.188 [2024-05-15 01:23:18.644505] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:43.189 nvme0n1 00:20:43.189 01:23:18 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:43.189 Running I/O for 1 seconds... 00:20:44.560 00:20:44.560 Latency(us) 00:20:44.560 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.560 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:44.560 Verification LBA range: start 0x0 length 0x2000 00:20:44.560 nvme0n1 : 1.06 1669.65 6.52 0.00 0.00 75024.59 7025.46 109890.76 00:20:44.560 =================================================================================================================== 00:20:44.560 Total : 1669.65 6.52 0.00 0.00 75024.59 7025.46 109890.76 00:20:44.560 0 00:20:44.560 01:23:19 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:44.560 01:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.560 01:23:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.560 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.560 01:23:20 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:20:44.560 "subsystems": [ 00:20:44.560 { 00:20:44.560 "subsystem": "keyring", 00:20:44.560 "config": [ 00:20:44.560 { 00:20:44.560 "method": "keyring_file_add_key", 00:20:44.560 "params": { 00:20:44.560 "name": "key0", 00:20:44.560 "path": "/tmp/tmp.LPIwG7wSST" 00:20:44.560 } 00:20:44.560 } 00:20:44.560 ] 00:20:44.560 }, 00:20:44.560 { 00:20:44.560 "subsystem": "iobuf", 00:20:44.560 "config": [ 00:20:44.560 { 00:20:44.560 "method": "iobuf_set_options", 00:20:44.560 "params": { 00:20:44.560 "small_pool_count": 8192, 00:20:44.560 "large_pool_count": 1024, 00:20:44.560 "small_bufsize": 8192, 00:20:44.560 "large_bufsize": 135168 00:20:44.560 } 00:20:44.560 } 00:20:44.560 ] 00:20:44.560 }, 00:20:44.560 { 00:20:44.560 "subsystem": "sock", 00:20:44.560 "config": [ 00:20:44.560 { 00:20:44.560 "method": "sock_impl_set_options", 00:20:44.560 "params": { 00:20:44.560 "impl_name": "posix", 00:20:44.560 "recv_buf_size": 2097152, 
00:20:44.560 "send_buf_size": 2097152, 00:20:44.560 "enable_recv_pipe": true, 00:20:44.560 "enable_quickack": false, 00:20:44.560 "enable_placement_id": 0, 00:20:44.560 "enable_zerocopy_send_server": true, 00:20:44.560 "enable_zerocopy_send_client": false, 00:20:44.560 "zerocopy_threshold": 0, 00:20:44.560 "tls_version": 0, 00:20:44.560 "enable_ktls": false 00:20:44.560 } 00:20:44.560 }, 00:20:44.560 { 00:20:44.560 "method": "sock_impl_set_options", 00:20:44.560 "params": { 00:20:44.560 "impl_name": "ssl", 00:20:44.560 "recv_buf_size": 4096, 00:20:44.560 "send_buf_size": 4096, 00:20:44.560 "enable_recv_pipe": true, 00:20:44.560 "enable_quickack": false, 00:20:44.560 "enable_placement_id": 0, 00:20:44.560 "enable_zerocopy_send_server": true, 00:20:44.560 "enable_zerocopy_send_client": false, 00:20:44.560 "zerocopy_threshold": 0, 00:20:44.560 "tls_version": 0, 00:20:44.560 "enable_ktls": false 00:20:44.560 } 00:20:44.560 } 00:20:44.560 ] 00:20:44.560 }, 00:20:44.560 { 00:20:44.560 "subsystem": "vmd", 00:20:44.560 "config": [] 00:20:44.560 }, 00:20:44.560 { 00:20:44.560 "subsystem": "accel", 00:20:44.560 "config": [ 00:20:44.560 { 00:20:44.560 "method": "accel_set_options", 00:20:44.560 "params": { 00:20:44.560 "small_cache_size": 128, 00:20:44.560 "large_cache_size": 16, 00:20:44.560 "task_count": 2048, 00:20:44.560 "sequence_count": 2048, 00:20:44.560 "buf_count": 2048 00:20:44.560 } 00:20:44.560 } 00:20:44.560 ] 00:20:44.560 }, 00:20:44.560 { 00:20:44.560 "subsystem": "bdev", 00:20:44.560 "config": [ 00:20:44.560 { 00:20:44.560 "method": "bdev_set_options", 00:20:44.560 "params": { 00:20:44.560 "bdev_io_pool_size": 65535, 00:20:44.560 "bdev_io_cache_size": 256, 00:20:44.560 "bdev_auto_examine": true, 00:20:44.560 "iobuf_small_cache_size": 128, 00:20:44.560 "iobuf_large_cache_size": 16 00:20:44.560 } 00:20:44.560 }, 00:20:44.560 { 00:20:44.560 "method": "bdev_raid_set_options", 00:20:44.560 "params": { 00:20:44.560 "process_window_size_kb": 1024 00:20:44.560 } 00:20:44.560 }, 00:20:44.560 { 00:20:44.560 "method": "bdev_iscsi_set_options", 00:20:44.560 "params": { 00:20:44.560 "timeout_sec": 30 00:20:44.560 } 00:20:44.560 }, 00:20:44.560 { 00:20:44.560 "method": "bdev_nvme_set_options", 00:20:44.560 "params": { 00:20:44.560 "action_on_timeout": "none", 00:20:44.560 "timeout_us": 0, 00:20:44.560 "timeout_admin_us": 0, 00:20:44.560 "keep_alive_timeout_ms": 10000, 00:20:44.560 "arbitration_burst": 0, 00:20:44.560 "low_priority_weight": 0, 00:20:44.560 "medium_priority_weight": 0, 00:20:44.560 "high_priority_weight": 0, 00:20:44.560 "nvme_adminq_poll_period_us": 10000, 00:20:44.560 "nvme_ioq_poll_period_us": 0, 00:20:44.560 "io_queue_requests": 0, 00:20:44.560 "delay_cmd_submit": true, 00:20:44.560 "transport_retry_count": 4, 00:20:44.560 "bdev_retry_count": 3, 00:20:44.560 "transport_ack_timeout": 0, 00:20:44.560 "ctrlr_loss_timeout_sec": 0, 00:20:44.560 "reconnect_delay_sec": 0, 00:20:44.560 "fast_io_fail_timeout_sec": 0, 00:20:44.560 "disable_auto_failback": false, 00:20:44.560 "generate_uuids": false, 00:20:44.560 "transport_tos": 0, 00:20:44.560 "nvme_error_stat": false, 00:20:44.560 "rdma_srq_size": 0, 00:20:44.560 "io_path_stat": false, 00:20:44.560 "allow_accel_sequence": false, 00:20:44.560 "rdma_max_cq_size": 0, 00:20:44.560 "rdma_cm_event_timeout_ms": 0, 00:20:44.560 "dhchap_digests": [ 00:20:44.560 "sha256", 00:20:44.560 "sha384", 00:20:44.560 "sha512" 00:20:44.560 ], 00:20:44.560 "dhchap_dhgroups": [ 00:20:44.560 "null", 00:20:44.560 "ffdhe2048", 00:20:44.560 "ffdhe3072", 
00:20:44.560 "ffdhe4096", 00:20:44.560 "ffdhe6144", 00:20:44.560 "ffdhe8192" 00:20:44.560 ] 00:20:44.560 } 00:20:44.560 }, 00:20:44.560 { 00:20:44.560 "method": "bdev_nvme_set_hotplug", 00:20:44.560 "params": { 00:20:44.560 "period_us": 100000, 00:20:44.560 "enable": false 00:20:44.560 } 00:20:44.560 }, 00:20:44.560 { 00:20:44.560 "method": "bdev_malloc_create", 00:20:44.560 "params": { 00:20:44.560 "name": "malloc0", 00:20:44.560 "num_blocks": 8192, 00:20:44.560 "block_size": 4096, 00:20:44.560 "physical_block_size": 4096, 00:20:44.560 "uuid": "23648965-b5bc-4f57-a0af-8f756b9b6f1c", 00:20:44.560 "optimal_io_boundary": 0 00:20:44.560 } 00:20:44.560 }, 00:20:44.560 { 00:20:44.560 "method": "bdev_wait_for_examine" 00:20:44.560 } 00:20:44.560 ] 00:20:44.560 }, 00:20:44.560 { 00:20:44.560 "subsystem": "nbd", 00:20:44.560 "config": [] 00:20:44.560 }, 00:20:44.560 { 00:20:44.560 "subsystem": "scheduler", 00:20:44.560 "config": [ 00:20:44.560 { 00:20:44.560 "method": "framework_set_scheduler", 00:20:44.560 "params": { 00:20:44.560 "name": "static" 00:20:44.560 } 00:20:44.560 } 00:20:44.560 ] 00:20:44.560 }, 00:20:44.560 { 00:20:44.560 "subsystem": "nvmf", 00:20:44.560 "config": [ 00:20:44.560 { 00:20:44.560 "method": "nvmf_set_config", 00:20:44.560 "params": { 00:20:44.560 "discovery_filter": "match_any", 00:20:44.560 "admin_cmd_passthru": { 00:20:44.560 "identify_ctrlr": false 00:20:44.560 } 00:20:44.560 } 00:20:44.560 }, 00:20:44.560 { 00:20:44.560 "method": "nvmf_set_max_subsystems", 00:20:44.560 "params": { 00:20:44.560 "max_subsystems": 1024 00:20:44.560 } 00:20:44.560 }, 00:20:44.560 { 00:20:44.560 "method": "nvmf_set_crdt", 00:20:44.560 "params": { 00:20:44.560 "crdt1": 0, 00:20:44.560 "crdt2": 0, 00:20:44.560 "crdt3": 0 00:20:44.560 } 00:20:44.560 }, 00:20:44.560 { 00:20:44.560 "method": "nvmf_create_transport", 00:20:44.560 "params": { 00:20:44.560 "trtype": "TCP", 00:20:44.560 "max_queue_depth": 128, 00:20:44.560 "max_io_qpairs_per_ctrlr": 127, 00:20:44.560 "in_capsule_data_size": 4096, 00:20:44.561 "max_io_size": 131072, 00:20:44.561 "io_unit_size": 131072, 00:20:44.561 "max_aq_depth": 128, 00:20:44.561 "num_shared_buffers": 511, 00:20:44.561 "buf_cache_size": 4294967295, 00:20:44.561 "dif_insert_or_strip": false, 00:20:44.561 "zcopy": false, 00:20:44.561 "c2h_success": false, 00:20:44.561 "sock_priority": 0, 00:20:44.561 "abort_timeout_sec": 1, 00:20:44.561 "ack_timeout": 0, 00:20:44.561 "data_wr_pool_size": 0 00:20:44.561 } 00:20:44.561 }, 00:20:44.561 { 00:20:44.561 "method": "nvmf_create_subsystem", 00:20:44.561 "params": { 00:20:44.561 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.561 "allow_any_host": false, 00:20:44.561 "serial_number": "00000000000000000000", 00:20:44.561 "model_number": "SPDK bdev Controller", 00:20:44.561 "max_namespaces": 32, 00:20:44.561 "min_cntlid": 1, 00:20:44.561 "max_cntlid": 65519, 00:20:44.561 "ana_reporting": false 00:20:44.561 } 00:20:44.561 }, 00:20:44.561 { 00:20:44.561 "method": "nvmf_subsystem_add_host", 00:20:44.561 "params": { 00:20:44.561 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.561 "host": "nqn.2016-06.io.spdk:host1", 00:20:44.561 "psk": "key0" 00:20:44.561 } 00:20:44.561 }, 00:20:44.561 { 00:20:44.561 "method": "nvmf_subsystem_add_ns", 00:20:44.561 "params": { 00:20:44.561 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.561 "namespace": { 00:20:44.561 "nsid": 1, 00:20:44.561 "bdev_name": "malloc0", 00:20:44.561 "nguid": "23648965B5BC4F57A0AF8F756B9B6F1C", 00:20:44.561 "uuid": "23648965-b5bc-4f57-a0af-8f756b9b6f1c", 00:20:44.561 
"no_auto_visible": false 00:20:44.561 } 00:20:44.561 } 00:20:44.561 }, 00:20:44.561 { 00:20:44.561 "method": "nvmf_subsystem_add_listener", 00:20:44.561 "params": { 00:20:44.561 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.561 "listen_address": { 00:20:44.561 "trtype": "TCP", 00:20:44.561 "adrfam": "IPv4", 00:20:44.561 "traddr": "10.0.0.2", 00:20:44.561 "trsvcid": "4420" 00:20:44.561 }, 00:20:44.561 "secure_channel": true 00:20:44.561 } 00:20:44.561 } 00:20:44.561 ] 00:20:44.561 } 00:20:44.561 ] 00:20:44.561 }' 00:20:44.561 01:23:20 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:44.561 01:23:20 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:20:44.561 "subsystems": [ 00:20:44.561 { 00:20:44.561 "subsystem": "keyring", 00:20:44.561 "config": [ 00:20:44.561 { 00:20:44.561 "method": "keyring_file_add_key", 00:20:44.561 "params": { 00:20:44.561 "name": "key0", 00:20:44.561 "path": "/tmp/tmp.LPIwG7wSST" 00:20:44.561 } 00:20:44.561 } 00:20:44.561 ] 00:20:44.561 }, 00:20:44.561 { 00:20:44.561 "subsystem": "iobuf", 00:20:44.561 "config": [ 00:20:44.561 { 00:20:44.561 "method": "iobuf_set_options", 00:20:44.561 "params": { 00:20:44.561 "small_pool_count": 8192, 00:20:44.561 "large_pool_count": 1024, 00:20:44.561 "small_bufsize": 8192, 00:20:44.561 "large_bufsize": 135168 00:20:44.561 } 00:20:44.561 } 00:20:44.561 ] 00:20:44.561 }, 00:20:44.561 { 00:20:44.561 "subsystem": "sock", 00:20:44.561 "config": [ 00:20:44.561 { 00:20:44.561 "method": "sock_impl_set_options", 00:20:44.561 "params": { 00:20:44.561 "impl_name": "posix", 00:20:44.561 "recv_buf_size": 2097152, 00:20:44.561 "send_buf_size": 2097152, 00:20:44.561 "enable_recv_pipe": true, 00:20:44.561 "enable_quickack": false, 00:20:44.561 "enable_placement_id": 0, 00:20:44.561 "enable_zerocopy_send_server": true, 00:20:44.561 "enable_zerocopy_send_client": false, 00:20:44.561 "zerocopy_threshold": 0, 00:20:44.561 "tls_version": 0, 00:20:44.561 "enable_ktls": false 00:20:44.561 } 00:20:44.561 }, 00:20:44.561 { 00:20:44.561 "method": "sock_impl_set_options", 00:20:44.561 "params": { 00:20:44.561 "impl_name": "ssl", 00:20:44.561 "recv_buf_size": 4096, 00:20:44.561 "send_buf_size": 4096, 00:20:44.561 "enable_recv_pipe": true, 00:20:44.561 "enable_quickack": false, 00:20:44.561 "enable_placement_id": 0, 00:20:44.561 "enable_zerocopy_send_server": true, 00:20:44.561 "enable_zerocopy_send_client": false, 00:20:44.561 "zerocopy_threshold": 0, 00:20:44.561 "tls_version": 0, 00:20:44.561 "enable_ktls": false 00:20:44.561 } 00:20:44.561 } 00:20:44.561 ] 00:20:44.561 }, 00:20:44.561 { 00:20:44.561 "subsystem": "vmd", 00:20:44.561 "config": [] 00:20:44.561 }, 00:20:44.561 { 00:20:44.561 "subsystem": "accel", 00:20:44.561 "config": [ 00:20:44.561 { 00:20:44.561 "method": "accel_set_options", 00:20:44.561 "params": { 00:20:44.561 "small_cache_size": 128, 00:20:44.561 "large_cache_size": 16, 00:20:44.561 "task_count": 2048, 00:20:44.561 "sequence_count": 2048, 00:20:44.561 "buf_count": 2048 00:20:44.561 } 00:20:44.561 } 00:20:44.561 ] 00:20:44.561 }, 00:20:44.561 { 00:20:44.561 "subsystem": "bdev", 00:20:44.561 "config": [ 00:20:44.561 { 00:20:44.561 "method": "bdev_set_options", 00:20:44.561 "params": { 00:20:44.561 "bdev_io_pool_size": 65535, 00:20:44.561 "bdev_io_cache_size": 256, 00:20:44.561 "bdev_auto_examine": true, 00:20:44.561 "iobuf_small_cache_size": 128, 00:20:44.561 "iobuf_large_cache_size": 16 00:20:44.561 } 00:20:44.561 }, 
00:20:44.561 { 00:20:44.561 "method": "bdev_raid_set_options", 00:20:44.561 "params": { 00:20:44.561 "process_window_size_kb": 1024 00:20:44.561 } 00:20:44.561 }, 00:20:44.561 { 00:20:44.561 "method": "bdev_iscsi_set_options", 00:20:44.561 "params": { 00:20:44.561 "timeout_sec": 30 00:20:44.561 } 00:20:44.561 }, 00:20:44.561 { 00:20:44.561 "method": "bdev_nvme_set_options", 00:20:44.561 "params": { 00:20:44.561 "action_on_timeout": "none", 00:20:44.561 "timeout_us": 0, 00:20:44.561 "timeout_admin_us": 0, 00:20:44.561 "keep_alive_timeout_ms": 10000, 00:20:44.561 "arbitration_burst": 0, 00:20:44.561 "low_priority_weight": 0, 00:20:44.561 "medium_priority_weight": 0, 00:20:44.561 "high_priority_weight": 0, 00:20:44.561 "nvme_adminq_poll_period_us": 10000, 00:20:44.561 "nvme_ioq_poll_period_us": 0, 00:20:44.561 "io_queue_requests": 512, 00:20:44.561 "delay_cmd_submit": true, 00:20:44.561 "transport_retry_count": 4, 00:20:44.561 "bdev_retry_count": 3, 00:20:44.561 "transport_ack_timeout": 0, 00:20:44.561 "ctrlr_loss_timeout_sec": 0, 00:20:44.561 "reconnect_delay_sec": 0, 00:20:44.561 "fast_io_fail_timeout_sec": 0, 00:20:44.561 "disable_auto_failback": false, 00:20:44.561 "generate_uuids": false, 00:20:44.561 "transport_tos": 0, 00:20:44.561 "nvme_error_stat": false, 00:20:44.561 "rdma_srq_size": 0, 00:20:44.561 "io_path_stat": false, 00:20:44.561 "allow_accel_sequence": false, 00:20:44.561 "rdma_max_cq_size": 0, 00:20:44.561 "rdma_cm_event_timeout_ms": 0, 00:20:44.561 "dhchap_digests": [ 00:20:44.561 "sha256", 00:20:44.561 "sha384", 00:20:44.561 "sha512" 00:20:44.561 ], 00:20:44.561 "dhchap_dhgroups": [ 00:20:44.561 "null", 00:20:44.561 "ffdhe2048", 00:20:44.561 "ffdhe3072", 00:20:44.561 "ffdhe4096", 00:20:44.561 "ffdhe6144", 00:20:44.561 "ffdhe8192" 00:20:44.561 ] 00:20:44.561 } 00:20:44.561 }, 00:20:44.561 { 00:20:44.561 "method": "bdev_nvme_attach_controller", 00:20:44.561 "params": { 00:20:44.561 "name": "nvme0", 00:20:44.561 "trtype": "TCP", 00:20:44.561 "adrfam": "IPv4", 00:20:44.561 "traddr": "10.0.0.2", 00:20:44.561 "trsvcid": "4420", 00:20:44.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.561 "prchk_reftag": false, 00:20:44.561 "prchk_guard": false, 00:20:44.561 "ctrlr_loss_timeout_sec": 0, 00:20:44.561 "reconnect_delay_sec": 0, 00:20:44.561 "fast_io_fail_timeout_sec": 0, 00:20:44.561 "psk": "key0", 00:20:44.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:44.561 "hdgst": false, 00:20:44.561 "ddgst": false 00:20:44.561 } 00:20:44.561 }, 00:20:44.561 { 00:20:44.561 "method": "bdev_nvme_set_hotplug", 00:20:44.561 "params": { 00:20:44.561 "period_us": 100000, 00:20:44.561 "enable": false 00:20:44.561 } 00:20:44.561 }, 00:20:44.561 { 00:20:44.561 "method": "bdev_enable_histogram", 00:20:44.561 "params": { 00:20:44.561 "name": "nvme0n1", 00:20:44.561 "enable": true 00:20:44.561 } 00:20:44.561 }, 00:20:44.561 { 00:20:44.561 "method": "bdev_wait_for_examine" 00:20:44.561 } 00:20:44.561 ] 00:20:44.561 }, 00:20:44.561 { 00:20:44.562 "subsystem": "nbd", 00:20:44.562 "config": [] 00:20:44.562 } 00:20:44.562 ] 00:20:44.562 }' 00:20:44.562 01:23:20 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 4143693 00:20:44.821 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4143693 ']' 00:20:44.821 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4143693 00:20:44.821 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:44.821 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:44.821 
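The two large JSON dumps above are the saved configurations captured at target/tls.sh@263–@264: tgtcfg from the target's default RPC socket and bperfcfg from the bdevperf socket (the latter additionally carries the keyring entry, the TLS-attached controller, and bdev_enable_histogram). A sketch of that capture step, assuming rpc_cmd in the log is the autotest wrapper around rpc.py:

  # snapshot the running target configuration (default socket /var/tmp/spdk.sock)
  tgtcfg=$(rpc.py save_config)
  # snapshot the bdevperf-side configuration
  bperfcfg=$(rpc.py -s /var/tmp/bdevperf.sock save_config)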
01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4143693 00:20:44.821 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:44.821 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:44.821 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4143693' 00:20:44.821 killing process with pid 4143693 00:20:44.821 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4143693 00:20:44.821 Received shutdown signal, test time was about 1.000000 seconds 00:20:44.821 00:20:44.821 Latency(us) 00:20:44.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.821 =================================================================================================================== 00:20:44.821 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:44.821 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4143693 00:20:44.821 01:23:20 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 4143529 00:20:44.821 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4143529 ']' 00:20:44.821 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4143529 00:20:44.821 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:45.131 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:45.131 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4143529 00:20:45.131 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:45.131 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:45.131 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4143529' 00:20:45.131 killing process with pid 4143529 00:20:45.131 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4143529 00:20:45.131 [2024-05-15 01:23:20.566728] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:45.131 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4143529 00:20:45.131 01:23:20 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:20:45.131 01:23:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:45.131 01:23:20 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:20:45.131 "subsystems": [ 00:20:45.131 { 00:20:45.131 "subsystem": "keyring", 00:20:45.131 "config": [ 00:20:45.131 { 00:20:45.131 "method": "keyring_file_add_key", 00:20:45.131 "params": { 00:20:45.131 "name": "key0", 00:20:45.131 "path": "/tmp/tmp.LPIwG7wSST" 00:20:45.131 } 00:20:45.131 } 00:20:45.131 ] 00:20:45.131 }, 00:20:45.131 { 00:20:45.131 "subsystem": "iobuf", 00:20:45.131 "config": [ 00:20:45.131 { 00:20:45.131 "method": "iobuf_set_options", 00:20:45.131 "params": { 00:20:45.131 "small_pool_count": 8192, 00:20:45.131 "large_pool_count": 1024, 00:20:45.131 "small_bufsize": 8192, 00:20:45.131 "large_bufsize": 135168 00:20:45.131 } 00:20:45.131 } 00:20:45.131 ] 00:20:45.131 }, 00:20:45.131 { 00:20:45.131 "subsystem": "sock", 00:20:45.131 "config": [ 00:20:45.131 { 00:20:45.131 "method": "sock_impl_set_options", 00:20:45.131 "params": { 00:20:45.131 "impl_name": "posix", 00:20:45.131 
"recv_buf_size": 2097152, 00:20:45.131 "send_buf_size": 2097152, 00:20:45.131 "enable_recv_pipe": true, 00:20:45.131 "enable_quickack": false, 00:20:45.131 "enable_placement_id": 0, 00:20:45.131 "enable_zerocopy_send_server": true, 00:20:45.131 "enable_zerocopy_send_client": false, 00:20:45.131 "zerocopy_threshold": 0, 00:20:45.131 "tls_version": 0, 00:20:45.131 "enable_ktls": false 00:20:45.131 } 00:20:45.131 }, 00:20:45.131 { 00:20:45.131 "method": "sock_impl_set_options", 00:20:45.131 "params": { 00:20:45.131 "impl_name": "ssl", 00:20:45.131 "recv_buf_size": 4096, 00:20:45.131 "send_buf_size": 4096, 00:20:45.131 "enable_recv_pipe": true, 00:20:45.131 "enable_quickack": false, 00:20:45.131 "enable_placement_id": 0, 00:20:45.131 "enable_zerocopy_send_server": true, 00:20:45.131 "enable_zerocopy_send_client": false, 00:20:45.131 "zerocopy_threshold": 0, 00:20:45.131 "tls_version": 0, 00:20:45.131 "enable_ktls": false 00:20:45.131 } 00:20:45.131 } 00:20:45.131 ] 00:20:45.131 }, 00:20:45.131 { 00:20:45.131 "subsystem": "vmd", 00:20:45.131 "config": [] 00:20:45.131 }, 00:20:45.131 { 00:20:45.131 "subsystem": "accel", 00:20:45.131 "config": [ 00:20:45.131 { 00:20:45.131 "method": "accel_set_options", 00:20:45.131 "params": { 00:20:45.131 "small_cache_size": 128, 00:20:45.132 "large_cache_size": 16, 00:20:45.132 "task_count": 2048, 00:20:45.132 "sequence_count": 2048, 00:20:45.132 "buf_count": 2048 00:20:45.132 } 00:20:45.132 } 00:20:45.132 ] 00:20:45.132 }, 00:20:45.132 { 00:20:45.132 "subsystem": "bdev", 00:20:45.132 "config": [ 00:20:45.132 { 00:20:45.132 "method": "bdev_set_options", 00:20:45.132 "params": { 00:20:45.132 "bdev_io_pool_size": 65535, 00:20:45.132 "bdev_io_cache_size": 256, 00:20:45.132 "bdev_auto_examine": true, 00:20:45.132 "iobuf_small_cache_size": 128, 00:20:45.132 "iobuf_large_cache_size": 16 00:20:45.132 } 00:20:45.132 }, 00:20:45.132 { 00:20:45.132 "method": "bdev_raid_set_options", 00:20:45.132 "params": { 00:20:45.132 "process_window_size_kb": 1024 00:20:45.132 } 00:20:45.132 }, 00:20:45.132 { 00:20:45.132 "method": "bdev_iscsi_set_options", 00:20:45.132 "params": { 00:20:45.132 "timeout_sec": 30 00:20:45.132 } 00:20:45.132 }, 00:20:45.132 { 00:20:45.132 "method": "bdev_nvme_set_options", 00:20:45.132 "params": { 00:20:45.132 "action_on_timeout": "none", 00:20:45.132 "timeout_us": 0, 00:20:45.132 "timeout_admin_us": 0, 00:20:45.132 "keep_alive_timeout_ms": 10000, 00:20:45.132 "arbitration_burst": 0, 00:20:45.132 "low_priority_weight": 0, 00:20:45.132 "medium_priority_weight": 0, 00:20:45.132 "high_priority_weight": 0, 00:20:45.132 "nvme_adminq_poll_period_us": 10000, 00:20:45.132 "nvme_ioq_poll_period_us": 0, 00:20:45.132 "io_queue_requests": 0, 00:20:45.132 "delay_cmd_submit": true, 00:20:45.132 "transport_retry_count": 4, 00:20:45.132 "bdev_retry_count": 3, 00:20:45.132 "transport_ack_timeout": 0, 00:20:45.132 "ctrlr_loss_timeout_sec": 0, 00:20:45.132 "reconnect_delay_sec": 0, 00:20:45.132 "fast_io_fail_timeout_sec": 0, 00:20:45.132 "disable_auto_failback": false, 00:20:45.132 "generate_uuids": false, 00:20:45.132 "transport_tos": 0, 00:20:45.132 "nvme_error_stat": false, 00:20:45.132 "rdma_srq_size": 0, 00:20:45.132 "io_path_stat": false, 00:20:45.132 "allow_accel_sequence": false, 00:20:45.132 "rdma_max_cq_size": 0, 00:20:45.132 "rdma_cm_event_timeout_ms": 0, 00:20:45.132 "dhchap_digests": [ 00:20:45.132 "sha256", 00:20:45.132 "sha384", 00:20:45.132 "sha512" 00:20:45.132 ], 00:20:45.132 "dhchap_dhgroups": [ 00:20:45.132 "null", 00:20:45.132 "ffdhe2048", 
00:20:45.132 "ffdhe3072", 00:20:45.132 "ffdhe4096", 00:20:45.132 "ffdhe6144", 00:20:45.132 "ffdhe8192" 00:20:45.132 ] 00:20:45.132 } 00:20:45.132 }, 00:20:45.132 { 00:20:45.132 "method": "bdev_nvme_set_hotplug", 00:20:45.132 "params": { 00:20:45.132 "period_us": 100000, 00:20:45.132 "enable": false 00:20:45.132 } 00:20:45.132 }, 00:20:45.132 { 00:20:45.132 "method": "bdev_malloc_create", 00:20:45.132 "params": { 00:20:45.132 "name": "malloc0", 00:20:45.132 "num_blocks": 8192, 00:20:45.132 "block_size": 4096, 00:20:45.132 "physical_block_size": 4096, 00:20:45.132 "uuid": "23648965-b5bc-4f57-a0af-8f756b9b6f1c", 00:20:45.132 "optimal_io_boundary": 0 00:20:45.132 } 00:20:45.132 }, 00:20:45.132 { 00:20:45.132 "method": "bdev_wait_for_examine" 00:20:45.132 } 00:20:45.132 ] 00:20:45.132 }, 00:20:45.132 { 00:20:45.132 "subsystem": "nbd", 00:20:45.132 "config": [] 00:20:45.132 }, 00:20:45.132 { 00:20:45.132 "subsystem": "scheduler", 00:20:45.132 "config": [ 00:20:45.132 { 00:20:45.132 "method": "framework_set_scheduler", 00:20:45.132 "params": { 00:20:45.132 "name": "static" 00:20:45.132 } 00:20:45.132 } 00:20:45.132 ] 00:20:45.132 }, 00:20:45.132 { 00:20:45.132 "subsystem": "nvmf", 00:20:45.132 "config": [ 00:20:45.132 { 00:20:45.132 "method": "nvmf_set_config", 00:20:45.132 "params": { 00:20:45.132 "discovery_filter": "match_any", 00:20:45.132 "admin_cmd_passthru": { 00:20:45.132 "identify_ctrlr": false 00:20:45.132 } 00:20:45.132 } 00:20:45.132 }, 00:20:45.132 { 00:20:45.132 "method": "nvmf_set_max_subsystems", 00:20:45.132 "params": { 00:20:45.132 "max_subsystems": 1024 00:20:45.132 } 00:20:45.132 }, 00:20:45.132 { 00:20:45.132 "method": "nvmf_set_crdt", 00:20:45.132 "params": { 00:20:45.132 "crdt1": 0, 00:20:45.132 "crdt2": 0, 00:20:45.132 "crdt3": 0 00:20:45.132 } 00:20:45.132 }, 00:20:45.132 { 00:20:45.132 "method": "nvmf_create_transport", 00:20:45.132 "params": { 00:20:45.132 "trtype": "TCP", 00:20:45.132 "max_queue_depth": 128, 00:20:45.132 "max_io_qpairs_per_ctrlr": 127, 00:20:45.132 "in_capsule_data_size": 4096, 00:20:45.132 "max_io_size": 131072, 00:20:45.132 "io_unit_size": 131072, 00:20:45.132 "max_aq_depth": 128, 00:20:45.132 "num_shared_buffers": 511, 00:20:45.132 "buf_cache_size": 4294967295, 00:20:45.132 "dif_insert_or_strip": false, 00:20:45.132 "zcopy": false, 00:20:45.132 "c2h_success": false, 00:20:45.132 "sock_priority": 0, 00:20:45.132 "abort_timeout_sec": 1, 00:20:45.132 "ack_timeout": 0, 00:20:45.132 "data_wr_pool_size": 0 00:20:45.132 } 00:20:45.132 }, 00:20:45.132 { 00:20:45.132 "method": "nvmf_create_subsystem", 00:20:45.132 "params": { 00:20:45.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.132 "allow_any_host": false, 00:20:45.132 "serial_number": "00000000000000000000", 00:20:45.132 "model_n 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:45.132 umber": "SPDK bdev Controller", 00:20:45.132 "max_namespaces": 32, 00:20:45.132 "min_cntlid": 1, 00:20:45.132 "max_cntlid": 65519, 00:20:45.132 "ana_reporting": false 00:20:45.132 } 00:20:45.132 }, 00:20:45.132 { 00:20:45.132 "method": "nvmf_subsystem_add_host", 00:20:45.132 "params": { 00:20:45.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.132 "host": "nqn.2016-06.io.spdk:host1", 00:20:45.132 "psk": "key0" 00:20:45.132 } 00:20:45.132 }, 00:20:45.132 { 00:20:45.132 "method": "nvmf_subsystem_add_ns", 00:20:45.132 "params": { 00:20:45.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.132 "namespace": { 00:20:45.132 "nsid": 1, 00:20:45.132 "bdev_name": "malloc0", 00:20:45.132 
"nguid": "23648965B5BC4F57A0AF8F756B9B6F1C", 00:20:45.132 "uuid": "23648965-b5bc-4f57-a0af-8f756b9b6f1c", 00:20:45.132 "no_auto_visible": false 00:20:45.132 } 00:20:45.132 } 00:20:45.132 }, 00:20:45.132 { 00:20:45.132 "method": "nvmf_subsystem_add_listener", 00:20:45.132 "params": { 00:20:45.132 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.132 "listen_address": { 00:20:45.132 "trtype": "TCP", 00:20:45.132 "adrfam": "IPv4", 00:20:45.132 "traddr": "10.0.0.2", 00:20:45.132 "trsvcid": "4420" 00:20:45.132 }, 00:20:45.132 "secure_channel": true 00:20:45.132 } 00:20:45.132 } 00:20:45.132 ] 00:20:45.132 } 00:20:45.132 ] 00:20:45.132 }' 00:20:45.132 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.132 01:23:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=4144251 00:20:45.132 01:23:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:45.132 01:23:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 4144251 00:20:45.132 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4144251 ']' 00:20:45.132 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.132 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:45.132 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.132 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:45.132 01:23:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.390 [2024-05-15 01:23:20.833029] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:20:45.390 [2024-05-15 01:23:20.833081] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.390 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.390 [2024-05-15 01:23:20.907188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.390 [2024-05-15 01:23:20.972062] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.390 [2024-05-15 01:23:20.972104] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.390 [2024-05-15 01:23:20.972113] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.390 [2024-05-15 01:23:20.972121] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.390 [2024-05-15 01:23:20.972128] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:45.390 [2024-05-15 01:23:20.972213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.648 [2024-05-15 01:23:21.174852] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.648 [2024-05-15 01:23:21.206859] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:45.648 [2024-05-15 01:23:21.206910] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:45.648 [2024-05-15 01:23:21.219550] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.214 01:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:46.214 01:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:46.214 01:23:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:46.214 01:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:46.214 01:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.214 01:23:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.214 01:23:21 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=4144526 00:20:46.214 01:23:21 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 4144526 /var/tmp/bdevperf.sock 00:20:46.214 01:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 4144526 ']' 00:20:46.214 01:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:46.214 01:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:46.214 01:23:21 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:46.214 01:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:46.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:46.214 01:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:46.214 01:23:21 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:20:46.214 "subsystems": [ 00:20:46.214 { 00:20:46.214 "subsystem": "keyring", 00:20:46.214 "config": [ 00:20:46.214 { 00:20:46.214 "method": "keyring_file_add_key", 00:20:46.214 "params": { 00:20:46.214 "name": "key0", 00:20:46.214 "path": "/tmp/tmp.LPIwG7wSST" 00:20:46.214 } 00:20:46.214 } 00:20:46.214 ] 00:20:46.214 }, 00:20:46.214 { 00:20:46.214 "subsystem": "iobuf", 00:20:46.214 "config": [ 00:20:46.214 { 00:20:46.214 "method": "iobuf_set_options", 00:20:46.214 "params": { 00:20:46.214 "small_pool_count": 8192, 00:20:46.214 "large_pool_count": 1024, 00:20:46.214 "small_bufsize": 8192, 00:20:46.214 "large_bufsize": 135168 00:20:46.214 } 00:20:46.214 } 00:20:46.214 ] 00:20:46.214 }, 00:20:46.214 { 00:20:46.214 "subsystem": "sock", 00:20:46.214 "config": [ 00:20:46.214 { 00:20:46.214 "method": "sock_impl_set_options", 00:20:46.214 "params": { 00:20:46.214 "impl_name": "posix", 00:20:46.214 "recv_buf_size": 2097152, 00:20:46.214 "send_buf_size": 2097152, 00:20:46.214 "enable_recv_pipe": true, 00:20:46.214 "enable_quickack": false, 00:20:46.214 "enable_placement_id": 0, 00:20:46.214 "enable_zerocopy_send_server": true, 00:20:46.214 "enable_zerocopy_send_client": false, 00:20:46.214 "zerocopy_threshold": 0, 00:20:46.214 "tls_version": 0, 00:20:46.214 "enable_ktls": false 00:20:46.214 } 00:20:46.214 }, 00:20:46.214 { 00:20:46.214 "method": "sock_impl_set_options", 00:20:46.214 "params": { 00:20:46.214 "impl_name": "ssl", 00:20:46.214 "recv_buf_size": 4096, 00:20:46.214 "send_buf_size": 4096, 00:20:46.214 "enable_recv_pipe": true, 00:20:46.214 "enable_quickack": false, 00:20:46.214 "enable_placement_id": 0, 00:20:46.214 "enable_zerocopy_send_server": true, 00:20:46.214 "enable_zerocopy_send_client": false, 00:20:46.214 "zerocopy_threshold": 0, 00:20:46.214 "tls_version": 0, 00:20:46.214 "enable_ktls": false 00:20:46.214 } 00:20:46.214 } 00:20:46.214 ] 00:20:46.214 }, 00:20:46.214 { 00:20:46.214 "subsystem": "vmd", 00:20:46.214 "config": [] 00:20:46.214 }, 00:20:46.214 { 00:20:46.214 "subsystem": "accel", 00:20:46.214 "config": [ 00:20:46.214 { 00:20:46.214 "method": "accel_set_options", 00:20:46.214 "params": { 00:20:46.214 "small_cache_size": 128, 00:20:46.214 "large_cache_size": 16, 00:20:46.214 "task_count": 2048, 00:20:46.214 "sequence_count": 2048, 00:20:46.214 "buf_count": 2048 00:20:46.214 } 00:20:46.214 } 00:20:46.214 ] 00:20:46.214 }, 00:20:46.214 { 00:20:46.214 "subsystem": "bdev", 00:20:46.214 "config": [ 00:20:46.214 { 00:20:46.214 "method": "bdev_set_options", 00:20:46.214 "params": { 00:20:46.214 "bdev_io_pool_size": 65535, 00:20:46.214 "bdev_io_cache_size": 256, 00:20:46.214 "bdev_auto_examine": true, 00:20:46.214 "iobuf_small_cache_size": 128, 00:20:46.214 "iobuf_large_cache_size": 16 00:20:46.214 } 00:20:46.214 }, 00:20:46.214 { 00:20:46.214 "method": "bdev_raid_set_options", 00:20:46.214 "params": { 00:20:46.214 "process_window_size_kb": 1024 00:20:46.214 } 00:20:46.214 }, 00:20:46.214 { 00:20:46.214 "method": "bdev_iscsi_set_options", 00:20:46.214 "params": { 00:20:46.214 "timeout_sec": 30 00:20:46.214 } 00:20:46.214 }, 00:20:46.214 { 00:20:46.214 "method": "bdev_nvme_set_options", 00:20:46.214 "params": { 00:20:46.214 "action_on_timeout": "none", 00:20:46.214 "timeout_us": 0, 00:20:46.214 "timeout_admin_us": 0, 00:20:46.214 "keep_alive_timeout_ms": 10000, 00:20:46.214 "arbitration_burst": 0, 00:20:46.214 
"low_priority_weight": 0, 00:20:46.214 "medium_priority_weight": 0, 00:20:46.214 "high_priority_weight": 0, 00:20:46.214 "nvme_adminq_poll_period_us": 10000, 00:20:46.214 "nvme_ioq_poll_period_us": 0, 00:20:46.215 "io_queue_requests": 512, 00:20:46.215 "delay_cmd_submit": true, 00:20:46.215 "transport_retry_count": 4, 00:20:46.215 "bdev_retry_count": 3, 00:20:46.215 "transport_ack_timeout": 0, 00:20:46.215 "ctrlr_loss_timeout_sec": 0, 00:20:46.215 "reconnect_delay_sec": 0, 00:20:46.215 "fast_io_fail_timeout_sec": 0, 00:20:46.215 "disable_auto_failback": false, 00:20:46.215 "generate_uuids": false, 00:20:46.215 "transport_tos": 0, 00:20:46.215 "nvme_error_stat": false, 00:20:46.215 "rdma_srq_size": 0, 00:20:46.215 "io_path_stat": false, 00:20:46.215 "allow_accel_sequence": false, 00:20:46.215 "rdma_max_cq_size": 0, 00:20:46.215 "rdma_cm_event_timeout_ms": 0, 00:20:46.215 "dhchap_digests": [ 00:20:46.215 "sha256", 00:20:46.215 "sha384", 00:20:46.215 "sha512" 00:20:46.215 ], 00:20:46.215 "dhchap_dhgroups": [ 00:20:46.215 "null", 00:20:46.215 "ffdhe2048", 00:20:46.215 "ffdhe3072", 00:20:46.215 "ffdhe4096", 00:20:46.215 "ffdhe6144", 00:20:46.215 "ffdhe8192" 00:20:46.215 ] 00:20:46.215 } 00:20:46.215 }, 00:20:46.215 { 00:20:46.215 "method": "bdev_nvme_attach_controller", 00:20:46.215 "params": { 00:20:46.215 "name": "nvme0", 00:20:46.215 "trtype": "TCP", 00:20:46.215 "adrfam": "IPv4", 00:20:46.215 "traddr": "10.0.0.2", 00:20:46.215 "trsvcid": "4420", 00:20:46.215 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.215 "prchk_reftag": false, 00:20:46.215 "prchk_guard": false, 00:20:46.215 "ctrlr_loss_timeout_sec": 0, 00:20:46.215 "reconnect_delay_sec": 0, 00:20:46.215 "fast_io_fail_timeout_sec": 0, 00:20:46.215 "psk": "key0", 00:20:46.215 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.215 "hdgst": false, 00:20:46.215 "ddgst": false 00:20:46.215 } 00:20:46.215 }, 00:20:46.215 { 00:20:46.215 "method": "bdev_nvme_set_hotplug", 00:20:46.215 "params": { 00:20:46.215 "period_us": 100000, 00:20:46.215 "enable": false 00:20:46.215 } 00:20:46.215 }, 00:20:46.215 { 00:20:46.215 "method": "bdev_enable_histogram", 00:20:46.215 "params": { 00:20:46.215 "name": "nvme0n1", 00:20:46.215 "enable": true 00:20:46.215 } 00:20:46.215 }, 00:20:46.215 { 00:20:46.215 "method": "bdev_wait_for_examine" 00:20:46.215 } 00:20:46.215 ] 00:20:46.215 }, 00:20:46.215 { 00:20:46.215 "subsystem": "nbd", 00:20:46.215 "config": [] 00:20:46.215 } 00:20:46.215 ] 00:20:46.215 }' 00:20:46.215 01:23:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.215 [2024-05-15 01:23:21.725849] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:20:46.215 [2024-05-15 01:23:21.725900] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4144526 ] 00:20:46.215 EAL: No free 2048 kB hugepages reported on node 1 00:20:46.215 [2024-05-15 01:23:21.796280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:46.215 [2024-05-15 01:23:21.866931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.473 [2024-05-15 01:23:22.009789] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:47.037 01:23:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:47.037 01:23:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:20:47.037 01:23:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:47.037 01:23:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:20:47.037 01:23:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.037 01:23:22 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:47.295 Running I/O for 1 seconds... 00:20:48.229 00:20:48.229 Latency(us) 00:20:48.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.229 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:48.229 Verification LBA range: start 0x0 length 0x2000 00:20:48.229 nvme0n1 : 1.05 1732.50 6.77 0.00 0.00 72385.94 6973.03 101082.73 00:20:48.229 =================================================================================================================== 00:20:48.229 Total : 1732.50 6.77 0.00 0.00 72385.94 6973.03 101082.73 00:20:48.229 0 00:20:48.229 01:23:23 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:20:48.229 01:23:23 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:20:48.229 01:23:23 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:48.229 01:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:20:48.229 01:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:20:48.229 01:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:20:48.229 01:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:48.229 01:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:20:48.229 01:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:20:48.229 01:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:20:48.229 01:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:48.229 nvmf_trace.0 00:20:48.486 01:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:20:48.486 01:23:23 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 4144526 00:20:48.486 01:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4144526 ']' 00:20:48.486 01:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4144526 
00:20:48.486 01:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:48.486 01:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:48.486 01:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4144526 00:20:48.486 01:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:48.486 01:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:48.486 01:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4144526' 00:20:48.486 killing process with pid 4144526 00:20:48.486 01:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4144526 00:20:48.486 Received shutdown signal, test time was about 1.000000 seconds 00:20:48.486 00:20:48.486 Latency(us) 00:20:48.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.486 =================================================================================================================== 00:20:48.486 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:48.486 01:23:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 4144526 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:48.744 rmmod nvme_tcp 00:20:48.744 rmmod nvme_fabrics 00:20:48.744 rmmod nvme_keyring 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 4144251 ']' 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 4144251 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 4144251 ']' 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 4144251 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4144251 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4144251' 00:20:48.744 killing process with pid 4144251 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 4144251 00:20:48.744 [2024-05-15 01:23:24.301580] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:48.744 01:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- 
# wait 4144251 00:20:49.002 01:23:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:49.002 01:23:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:49.002 01:23:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:49.002 01:23:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:49.002 01:23:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:49.002 01:23:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.002 01:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.002 01:23:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.910 01:23:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:50.910 01:23:26 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Y6GuQNKis3 /tmp/tmp.XAxzz0LVxF /tmp/tmp.LPIwG7wSST 00:20:50.910 00:20:50.910 real 1m26.834s 00:20:50.910 user 2m7.787s 00:20:50.910 sys 0m34.514s 00:20:50.910 01:23:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:50.910 01:23:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.910 ************************************ 00:20:50.910 END TEST nvmf_tls 00:20:50.910 ************************************ 00:20:51.170 01:23:26 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:51.170 01:23:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:51.170 01:23:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:51.170 01:23:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:51.170 ************************************ 00:20:51.170 START TEST nvmf_fips 00:20:51.170 ************************************ 00:20:51.170 01:23:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:51.170 * Looking for test storage... 
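The fips.sh run that starts here gates itself on the host OpenSSL before touching any NVMe/TCP code: it requires OpenSSL >= 3.0.0, checks that the FIPS provider module (fips.so) exists under the modules directory, generates a temporary OPENSSL_CONF that activates the base and fips providers, and then confirms enforcement by expecting a non-approved digest (MD5) to fail. A minimal shell sketch of that gate, condensed from the checks visible in the trace below (error handling simplified):

   # sketch only - mirrors the fips/fips.sh preamble traced below
   ver=$(openssl version | awk '{print $2}')            # e.g. 3.0.9, must be >= 3.0.0
   moddir=$(openssl info -modulesdir)                   # e.g. /usr/lib64/ossl-modules
   [[ -f $moddir/fips.so ]] || exit 1                   # FIPS provider module must be present
   openssl list -providers | grep -qi fips || exit 1    # provider activated via OPENSSL_CONF
   ! openssl md5 /dev/null >/dev/null 2>&1 || exit 1    # MD5 must be rejected in FIPS mode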
00:20:51.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:51.170 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.170 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:51.170 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.170 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.170 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.170 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.171 01:23:26 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:51.171 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:51.430 01:23:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:51.430 Error setting digest 00:20:51.430 0092D9A52A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:51.430 0092D9A52A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:51.431 01:23:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:51.431 01:23:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:51.431 01:23:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:51.431 01:23:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:51.431 01:23:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:51.431 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:51.431 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.431 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:51.431 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:51.431 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:51.431 01:23:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.431 01:23:26 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:51.431 01:23:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.431 01:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:51.431 01:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:51.431 01:23:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:51.431 01:23:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:58.000 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:58.000 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:58.000 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:58.000 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:58.000 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:58.000 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:58.000 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:58.000 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:58.001 
01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:58.001 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:58.001 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:58.001 Found net devices under 0000:af:00.0: cvl_0_0 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:58.001 Found net devices under 0000:af:00.1: cvl_0_1 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:58.001 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:58.264 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:58.264 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:58.264 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:58.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:58.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:20:58.264 00:20:58.264 --- 10.0.0.2 ping statistics --- 00:20:58.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.264 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:20:58.264 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:58.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:58.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:20:58.264 00:20:58.264 --- 10.0.0.1 ping statistics --- 00:20:58.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.264 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:20:58.264 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.264 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:58.264 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:58.264 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.264 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:58.264 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:58.264 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.264 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:58.264 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:58.264 01:23:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:58.264 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:58.265 01:23:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:58.265 01:23:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:58.265 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=4148600 00:20:58.265 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:58.265 01:23:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 4148600 00:20:58.265 01:23:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 4148600 ']' 00:20:58.265 01:23:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.265 01:23:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:58.265 01:23:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.265 01:23:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:58.265 01:23:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:58.265 [2024-05-15 01:23:33.953297] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:20:58.265 [2024-05-15 01:23:33.953354] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.522 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.522 [2024-05-15 01:23:34.028240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.522 [2024-05-15 01:23:34.100714] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:58.522 [2024-05-15 01:23:34.100752] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:58.522 [2024-05-15 01:23:34.100763] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.522 [2024-05-15 01:23:34.100772] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.522 [2024-05-15 01:23:34.100779] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:58.522 [2024-05-15 01:23:34.100800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.088 01:23:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:59.088 01:23:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:20:59.088 01:23:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:59.088 01:23:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:59.088 01:23:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:59.088 01:23:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.088 01:23:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:59.088 01:23:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:59.088 01:23:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:59.088 01:23:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:59.088 01:23:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:59.088 01:23:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:59.088 01:23:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:59.088 01:23:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:59.345 [2024-05-15 01:23:34.927995] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.345 [2024-05-15 01:23:34.943974] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:59.345 [2024-05-15 01:23:34.944012] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:59.345 [2024-05-15 01:23:34.944209] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.345 [2024-05-15 01:23:34.972123] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:59.345 malloc0 00:20:59.345 01:23:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:59.345 01:23:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=4148816 00:20:59.345 01:23:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:59.345 01:23:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 4148816 /var/tmp/bdevperf.sock 00:20:59.345 01:23:34 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@827 -- # '[' -z 4148816 ']' 00:20:59.345 01:23:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.345 01:23:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:59.345 01:23:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.346 01:23:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:59.346 01:23:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:59.603 [2024-05-15 01:23:35.053783] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:20:59.603 [2024-05-15 01:23:35.053838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4148816 ] 00:20:59.603 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.603 [2024-05-15 01:23:35.121269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.603 [2024-05-15 01:23:35.193739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.167 01:23:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:00.167 01:23:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:21:00.167 01:23:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:00.424 [2024-05-15 01:23:35.991406] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.424 [2024-05-15 01:23:35.991494] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:00.424 TLSTESTn1 00:21:00.424 01:23:36 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:00.681 Running I/O for 10 seconds... 
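The ten-second verify workload above runs over a TLS-protected queue pair: the target was started with a PSK-enabled listener on 10.0.0.2:4420, and the initiator side attaches through the bdevperf RPC socket using the same interchange key. A condensed sketch of the commands in the trace (the full workspace path is abbreviated to $SPDK_ROOT here; the key value is the one written by the test):

   # sketch of the PSK/TLS wiring exercised above
   KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
   echo -n "$KEY" > key.txt && chmod 0600 key.txt
   # attach a TLS controller to the listener via the bdevperf RPC socket
   $SPDK_ROOT/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
       -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key.txt
   # kick off the configured 128-deep, 4096-byte verify run for 10 seconds
   $SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests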
00:21:10.678 00:21:10.678 Latency(us) 00:21:10.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.678 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:10.678 Verification LBA range: start 0x0 length 0x2000 00:21:10.678 TLSTESTn1 : 10.06 1958.98 7.65 0.00 0.00 65170.41 5295.31 121634.82 00:21:10.678 =================================================================================================================== 00:21:10.678 Total : 1958.98 7.65 0.00 0.00 65170.41 5295.31 121634.82 00:21:10.678 0 00:21:10.678 01:23:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:10.678 01:23:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:10.678 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:21:10.678 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:21:10.678 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:21:10.678 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:10.678 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:21:10.678 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:21:10.678 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:21:10.678 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:10.678 nvmf_trace.0 00:21:10.678 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:21:10.678 01:23:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 4148816 00:21:10.678 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 4148816 ']' 00:21:10.678 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 4148816 00:21:10.678 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:21:10.678 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:10.678 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4148816 00:21:10.935 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:21:10.935 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:21:10.935 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4148816' 00:21:10.935 killing process with pid 4148816 00:21:10.935 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 4148816 00:21:10.935 Received shutdown signal, test time was about 10.000000 seconds 00:21:10.935 00:21:10.935 Latency(us) 00:21:10.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.935 =================================================================================================================== 00:21:10.935 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:10.935 [2024-05-15 01:23:46.412751] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:10.935 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 4148816 00:21:10.935 01:23:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:10.935 01:23:46 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:21:10.935 01:23:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:10.935 01:23:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:10.935 01:23:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:10.935 01:23:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:10.935 01:23:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:10.935 rmmod nvme_tcp 00:21:11.193 rmmod nvme_fabrics 00:21:11.193 rmmod nvme_keyring 00:21:11.193 01:23:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:11.193 01:23:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:11.193 01:23:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:11.193 01:23:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 4148600 ']' 00:21:11.193 01:23:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 4148600 00:21:11.193 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 4148600 ']' 00:21:11.193 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 4148600 00:21:11.193 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:21:11.193 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:11.193 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4148600 00:21:11.193 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:11.193 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:11.193 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4148600' 00:21:11.193 killing process with pid 4148600 00:21:11.193 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 4148600 00:21:11.193 [2024-05-15 01:23:46.755958] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:11.193 [2024-05-15 01:23:46.755994] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:11.193 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 4148600 00:21:11.451 01:23:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:11.451 01:23:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:11.451 01:23:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:11.451 01:23:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.451 01:23:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:11.451 01:23:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.451 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.451 01:23:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.352 01:23:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:13.611 01:23:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:13.611 00:21:13.611 real 0m22.380s 00:21:13.611 user 0m22.441s 00:21:13.611 sys 0m10.891s 00:21:13.611 01:23:49 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:13.611 01:23:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:13.611 ************************************ 00:21:13.611 END TEST nvmf_fips 00:21:13.611 ************************************ 00:21:13.611 01:23:49 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:13.611 01:23:49 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:13.611 01:23:49 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:13.611 01:23:49 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:13.611 01:23:49 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:13.611 01:23:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:20.164 01:23:55 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:20.164 01:23:55 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:21:20.164 01:23:55 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:20.164 01:23:55 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:20.165 01:23:55 
nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:20.165 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:20.165 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:20.165 Found net devices under 0000:af:00.0: cvl_0_0 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:20.165 Found net devices under 0000:af:00.1: cvl_0_1 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:21:20.165 01:23:55 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 
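The perf_adq suite is dispatched here only because the preceding scan found usable hardware: nvmf.sh re-runs the e810/x722 PCI discovery, collects the matching net devices into TCP_INTERFACE_LIST, and skips the test when that list is empty. A rough sketch of the gate (helper names match the trace; $rootdir and variable names are inferred where the trace only shows their expanded values):

   # sketch of the dispatch logic around perf_adq in nvmf/nvmf.sh
   if [[ $NET_TYPE == phy ]]; then
       # transport check expands to '[ tcp = tcp ]' in the trace
       gather_supported_nvmf_pci_devs              # fills net_devs from the PCI scan above
       TCP_INTERFACE_LIST=("${net_devs[@]}")
       if (( ${#TCP_INTERFACE_LIST[@]} > 0 )); then
           run_test nvmf_perf_adq "$rootdir/test/nvmf/target/perf_adq.sh" --transport=tcp
       fi
   fi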
00:21:20.165 01:23:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:20.165 01:23:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:20.165 01:23:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:20.165 ************************************ 00:21:20.165 START TEST nvmf_perf_adq 00:21:20.165 ************************************ 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:20.165 * Looking for test storage... 00:21:20.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.165 01:23:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.166 01:23:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.166 01:23:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:20.166 01:23:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.166 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:21:20.166 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:20.166 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:20.166 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.166 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.166 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.166 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:20.166 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:20.166 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:20.166 01:23:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:20.166 01:23:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:20.166 01:23:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:26.722 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:26.722 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:26.722 Found net devices under 0000:af:00.0: cvl_0_0 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:26.722 Found net devices under 0000:af:00.1: cvl_0_1 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:21:26.722 01:24:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:28.097 01:24:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:30.628 01:24:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:35.897 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:35.897 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:35.897 Found net devices under 0000:af:00.0: cvl_0_0 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:35.897 Found net devices under 0000:af:00.1: cvl_0_1 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.897 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.898 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.898 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:35.898 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.898 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.898 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:35.898 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.898 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.898 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:35.898 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:35.898 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.898 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.898 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.898 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.898 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:35.898 01:24:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:35.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:35.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:21:35.898 00:21:35.898 --- 10.0.0.2 ping statistics --- 00:21:35.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.898 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:35.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:21:35.898 00:21:35.898 --- 10.0.0.1 ping statistics --- 00:21:35.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.898 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4159842 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4159842 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 4159842 ']' 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:35.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:35.898 01:24:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:35.898 [2024-05-15 01:24:11.151233] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:21:35.898 [2024-05-15 01:24:11.151278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.898 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.898 [2024-05-15 01:24:11.225597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:35.898 [2024-05-15 01:24:11.300079] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.898 [2024-05-15 01:24:11.300122] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.898 [2024-05-15 01:24:11.300131] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.898 [2024-05-15 01:24:11.300140] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.898 [2024-05-15 01:24:11.300148] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.898 [2024-05-15 01:24:11.300201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.898 [2024-05-15 01:24:11.300266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.898 [2024-05-15 01:24:11.300358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.898 [2024-05-15 01:24:11.300364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.465 01:24:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:36.465 01:24:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:21:36.465 01:24:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:36.465 01:24:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:36.465 01:24:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:36.465 [2024-05-15 01:24:12.147760] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.465 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:36.738 Malloc1 00:21:36.738 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.738 01:24:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:36.738 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.738 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:36.738 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.738 01:24:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:36.738 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.738 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:36.738 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.738 01:24:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:36.739 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.739 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:36.739 [2024-05-15 01:24:12.194311] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:36.739 [2024-05-15 01:24:12.194593] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.739 01:24:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.739 01:24:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=4160129 00:21:36.739 01:24:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:21:36.739 01:24:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:36.739 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.659 01:24:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:21:38.659 01:24:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.659 01:24:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:38.659 01:24:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.659 01:24:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:21:38.659 "tick_rate": 2500000000, 00:21:38.659 "poll_groups": [ 00:21:38.659 { 00:21:38.659 "name": "nvmf_tgt_poll_group_000", 00:21:38.659 "admin_qpairs": 1, 00:21:38.659 "io_qpairs": 1, 00:21:38.659 "current_admin_qpairs": 1, 00:21:38.659 "current_io_qpairs": 1, 00:21:38.659 "pending_bdev_io": 0, 00:21:38.659 "completed_nvme_io": 19309, 00:21:38.659 "transports": [ 00:21:38.659 { 00:21:38.659 "trtype": "TCP" 00:21:38.659 } 00:21:38.659 ] 00:21:38.659 }, 00:21:38.659 { 00:21:38.659 "name": "nvmf_tgt_poll_group_001", 00:21:38.659 "admin_qpairs": 0, 00:21:38.659 "io_qpairs": 1, 00:21:38.659 "current_admin_qpairs": 0, 00:21:38.659 "current_io_qpairs": 1, 00:21:38.659 "pending_bdev_io": 0, 00:21:38.659 "completed_nvme_io": 19037, 00:21:38.659 "transports": [ 00:21:38.659 { 00:21:38.659 "trtype": "TCP" 00:21:38.659 } 00:21:38.659 ] 00:21:38.659 }, 00:21:38.659 { 00:21:38.659 "name": "nvmf_tgt_poll_group_002", 00:21:38.659 "admin_qpairs": 0, 00:21:38.659 "io_qpairs": 1, 00:21:38.659 "current_admin_qpairs": 0, 00:21:38.659 "current_io_qpairs": 1, 00:21:38.659 "pending_bdev_io": 0, 00:21:38.659 "completed_nvme_io": 19348, 00:21:38.659 "transports": [ 00:21:38.659 { 00:21:38.659 "trtype": "TCP" 00:21:38.659 } 00:21:38.659 ] 00:21:38.659 }, 00:21:38.660 { 00:21:38.660 "name": "nvmf_tgt_poll_group_003", 00:21:38.660 "admin_qpairs": 0, 00:21:38.660 "io_qpairs": 1, 00:21:38.660 "current_admin_qpairs": 0, 00:21:38.660 "current_io_qpairs": 1, 00:21:38.660 "pending_bdev_io": 0, 00:21:38.660 "completed_nvme_io": 19028, 00:21:38.660 "transports": [ 00:21:38.660 { 00:21:38.660 "trtype": "TCP" 00:21:38.660 } 00:21:38.660 ] 00:21:38.660 } 00:21:38.660 ] 00:21:38.660 }' 00:21:38.660 01:24:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:38.660 01:24:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:21:38.660 01:24:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:21:38.660 01:24:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:21:38.660 01:24:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 4160129 00:21:46.769 Initializing NVMe Controllers 00:21:46.769 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:46.769 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:46.769 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:46.769 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:46.769 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:46.769 Initialization complete. Launching workers. 
00:21:46.769 ======================================================== 00:21:46.769 Latency(us) 00:21:46.769 Device Information : IOPS MiB/s Average min max 00:21:46.769 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10032.23 39.19 6380.18 2486.75 10136.45 00:21:46.769 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9940.13 38.83 6438.83 2555.40 11353.28 00:21:46.769 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10101.23 39.46 6335.70 2477.46 11678.02 00:21:46.769 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9977.53 38.97 6414.09 2588.28 11609.42 00:21:46.769 ======================================================== 00:21:46.769 Total : 40051.12 156.45 6391.97 2477.46 11678.02 00:21:46.769 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:46.770 rmmod nvme_tcp 00:21:46.770 rmmod nvme_fabrics 00:21:46.770 rmmod nvme_keyring 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4159842 ']' 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 4159842 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 4159842 ']' 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 4159842 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4159842 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4159842' 00:21:46.770 killing process with pid 4159842 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 4159842 00:21:46.770 [2024-05-15 01:24:22.443745] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:46.770 01:24:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 4159842 00:21:47.028 01:24:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:47.028 01:24:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:47.028 01:24:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:47.028 01:24:22 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:47.028 01:24:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:47.028 01:24:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.028 01:24:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.028 01:24:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.579 01:24:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:49.579 01:24:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:21:49.579 01:24:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:21:50.513 01:24:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:21:53.043 01:24:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:58.316 
01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:58.316 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:58.316 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.316 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:58.317 Found net devices under 0000:af:00.0: cvl_0_0 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:58.317 Found net devices under 0000:af:00.1: cvl_0_1 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:58.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:58.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:21:58.317 00:21:58.317 --- 10.0.0.2 ping statistics --- 00:21:58.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.317 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:58.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:21:58.317 00:21:58.317 --- 10.0.0.1 ping statistics --- 00:21:58.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.317 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:58.317 net.core.busy_poll = 1 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:58.317 net.core.busy_read = 1 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=4163972 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 4163972 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 4163972 ']' 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:58.317 01:24:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:58.317 [2024-05-15 01:24:33.806971] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:21:58.317 [2024-05-15 01:24:33.807018] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.317 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.317 [2024-05-15 01:24:33.878845] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:58.317 [2024-05-15 01:24:33.952037] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.317 [2024-05-15 01:24:33.952074] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.317 [2024-05-15 01:24:33.952083] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.317 [2024-05-15 01:24:33.952091] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.317 [2024-05-15 01:24:33.952098] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
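For reference, the ADQ host-side setup traced above by adq_configure_driver amounts to the following sketch (interface cvl_0_0 and namespace cvl_0_0_ns_spdk are the names used in this run; the repository path to set_xps_rxqs is shortened to scripts/):
  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # split the NIC into 2 traffic classes: TC0 = queues 0-1, TC1 = queues 2-3, offloaded in channel mode
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  # steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1 in hardware
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  ip netns exec cvl_0_0_ns_spdk scripts/perf/nvmf/set_xps_rxqs cvl_0_0
The target-side half of the ADQ configuration is applied through rpc_cmd in the trace that follows (sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1).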
00:21:58.317 [2024-05-15 01:24:33.955211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.317 [2024-05-15 01:24:33.955229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.317 [2024-05-15 01:24:33.955311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.317 [2024-05-15 01:24:33.955314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:59.252 [2024-05-15 01:24:34.795965] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:59.252 Malloc1 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.252 01:24:34 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:59.252 [2024-05-15 01:24:34.846425] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:59.252 [2024-05-15 01:24:34.846677] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=4164254 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:59.252 01:24:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:59.252 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.784 01:24:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:01.784 01:24:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.784 01:24:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:01.784 01:24:36 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.784 01:24:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:01.784 "tick_rate": 2500000000, 00:22:01.784 "poll_groups": [ 00:22:01.784 { 00:22:01.784 "name": "nvmf_tgt_poll_group_000", 00:22:01.784 "admin_qpairs": 1, 00:22:01.784 "io_qpairs": 0, 00:22:01.784 "current_admin_qpairs": 1, 00:22:01.784 "current_io_qpairs": 0, 00:22:01.784 "pending_bdev_io": 0, 00:22:01.784 "completed_nvme_io": 0, 00:22:01.784 "transports": [ 00:22:01.784 { 00:22:01.784 "trtype": "TCP" 00:22:01.784 } 00:22:01.784 ] 00:22:01.784 }, 00:22:01.784 { 00:22:01.784 "name": "nvmf_tgt_poll_group_001", 00:22:01.784 "admin_qpairs": 0, 00:22:01.784 "io_qpairs": 4, 00:22:01.784 "current_admin_qpairs": 0, 00:22:01.784 "current_io_qpairs": 4, 00:22:01.784 "pending_bdev_io": 0, 00:22:01.784 "completed_nvme_io": 45790, 00:22:01.784 "transports": [ 00:22:01.784 { 00:22:01.784 "trtype": "TCP" 00:22:01.784 } 00:22:01.784 ] 00:22:01.785 }, 00:22:01.785 { 00:22:01.785 "name": 
"nvmf_tgt_poll_group_002", 00:22:01.785 "admin_qpairs": 0, 00:22:01.785 "io_qpairs": 0, 00:22:01.785 "current_admin_qpairs": 0, 00:22:01.785 "current_io_qpairs": 0, 00:22:01.785 "pending_bdev_io": 0, 00:22:01.785 "completed_nvme_io": 0, 00:22:01.785 "transports": [ 00:22:01.785 { 00:22:01.785 "trtype": "TCP" 00:22:01.785 } 00:22:01.785 ] 00:22:01.785 }, 00:22:01.785 { 00:22:01.785 "name": "nvmf_tgt_poll_group_003", 00:22:01.785 "admin_qpairs": 0, 00:22:01.785 "io_qpairs": 0, 00:22:01.785 "current_admin_qpairs": 0, 00:22:01.785 "current_io_qpairs": 0, 00:22:01.785 "pending_bdev_io": 0, 00:22:01.785 "completed_nvme_io": 0, 00:22:01.785 "transports": [ 00:22:01.785 { 00:22:01.785 "trtype": "TCP" 00:22:01.785 } 00:22:01.785 ] 00:22:01.785 } 00:22:01.785 ] 00:22:01.785 }' 00:22:01.785 01:24:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:01.785 01:24:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:01.785 01:24:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:22:01.785 01:24:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:22:01.785 01:24:36 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 4164254 00:22:09.956 Initializing NVMe Controllers 00:22:09.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:09.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:09.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:09.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:09.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:09.956 Initialization complete. Launching workers. 
00:22:09.956 ======================================================== 00:22:09.956 Latency(us) 00:22:09.956 Device Information : IOPS MiB/s Average min max 00:22:09.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5759.10 22.50 11113.52 1615.59 55722.41 00:22:09.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6325.10 24.71 10120.66 1565.17 55734.28 00:22:09.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5843.00 22.82 10954.52 1902.58 54940.43 00:22:09.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5887.20 23.00 10897.25 1902.49 56995.13 00:22:09.956 ======================================================== 00:22:09.956 Total : 23814.40 93.03 10757.34 1565.17 56995.13 00:22:09.956 00:22:09.956 01:24:44 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:09.956 01:24:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:09.956 01:24:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:09.956 01:24:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:09.956 01:24:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:09.956 01:24:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:09.956 01:24:44 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:09.956 rmmod nvme_tcp 00:22:09.956 rmmod nvme_fabrics 00:22:09.956 rmmod nvme_keyring 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 4163972 ']' 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 4163972 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 4163972 ']' 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 4163972 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4163972 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4163972' 00:22:09.956 killing process with pid 4163972 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 4163972 00:22:09.956 [2024-05-15 01:24:45.115951] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 4163972 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:09.956 
01:24:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.956 01:24:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.244 01:24:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:13.244 01:24:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:13.244 00:22:13.244 real 0m52.768s 00:22:13.244 user 2m45.445s 00:22:13.244 sys 0m14.706s 00:22:13.244 01:24:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:13.244 01:24:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.244 ************************************ 00:22:13.244 END TEST nvmf_perf_adq 00:22:13.244 ************************************ 00:22:13.244 01:24:48 nvmf_tcp -- nvmf/nvmf.sh@82 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:13.244 01:24:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:13.244 01:24:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:13.244 01:24:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:13.244 ************************************ 00:22:13.244 START TEST nvmf_shutdown 00:22:13.244 ************************************ 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:13.244 * Looking for test storage... 
00:22:13.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:13.244 ************************************ 00:22:13.244 START TEST nvmf_shutdown_tc1 00:22:13.244 ************************************ 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:22:13.244 01:24:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:13.244 01:24:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:19.809 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:19.809 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:19.809 01:24:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.809 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:19.809 Found net devices under 0000:af:00.0: cvl_0_0 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:19.810 Found net devices under 0000:af:00.1: cvl_0_1 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:19.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:22:19.810 00:22:19.810 --- 10.0.0.2 ping statistics --- 00:22:19.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.810 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:19.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:22:19.810 00:22:19.810 --- 10.0.0.1 ping statistics --- 00:22:19.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.810 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=4169911 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 4169911 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 4169911 ']' 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:19.810 01:24:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:19.810 [2024-05-15 01:24:55.475048] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
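The nvmftestinit trace above amounts to the following interface wiring, shown here as a condensed sketch that assumes, as in this run, the two E810 ports enumerate as cvl_0_0 and cvl_0_1. The target-side port is moved into its own network namespace so that initiator (10.0.0.1) and target (10.0.0.2) traffic actually crosses the link:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add $NS
ip link set cvl_0_0 netns $NS                            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address on the host side
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0    # target listen address
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec $NS ping -c 1 10.0.0.1   # both directions must answer

The nvmf_tgt whose start-up banner appears here is then launched under ip netns exec cvl_0_0_ns_spdk (nvmf/common.sh@480 above), which is why it can listen on 10.0.0.2 while the host side keeps only the initiator address.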
00:22:19.810 [2024-05-15 01:24:55.475095] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.069 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.069 [2024-05-15 01:24:55.550630] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:20.069 [2024-05-15 01:24:55.623507] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.069 [2024-05-15 01:24:55.623544] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:20.069 [2024-05-15 01:24:55.623553] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.069 [2024-05-15 01:24:55.623561] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:20.069 [2024-05-15 01:24:55.623584] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:20.070 [2024-05-15 01:24:55.623627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:20.070 [2024-05-15 01:24:55.623710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:20.070 [2024-05-15 01:24:55.623809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.070 [2024-05-15 01:24:55.623809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:20.637 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:20.637 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:22:20.637 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:20.637 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:20.637 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:20.637 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:20.637 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:20.637 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.637 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:20.896 [2024-05-15 01:24:56.332986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.896 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.896 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:20.896 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:20.896 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:20.896 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:20.896 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:20.896 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:20.896 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.897 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:20.897 Malloc1 00:22:20.897 [2024-05-15 01:24:56.443479] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:20.897 [2024-05-15 01:24:56.443725] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.897 Malloc2 00:22:20.897 Malloc3 00:22:20.897 Malloc4 00:22:21.156 Malloc5 00:22:21.156 Malloc6 00:22:21.156 Malloc7 00:22:21.156 Malloc8 00:22:21.156 Malloc9 00:22:21.156 Malloc10 00:22:21.156 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.156 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:21.156 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:21.156 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:21.416 01:24:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=4170232 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 4170232 /var/tmp/bdevperf.sock 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 4170232 ']' 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:21.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:21.416 { 00:22:21.416 "params": { 00:22:21.416 "name": "Nvme$subsystem", 00:22:21.416 "trtype": "$TEST_TRANSPORT", 00:22:21.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.416 "adrfam": "ipv4", 00:22:21.416 "trsvcid": "$NVMF_PORT", 00:22:21.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.416 "hdgst": ${hdgst:-false}, 00:22:21.416 "ddgst": ${ddgst:-false} 00:22:21.416 }, 00:22:21.416 "method": "bdev_nvme_attach_controller" 00:22:21.416 } 00:22:21.416 EOF 00:22:21.416 )") 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:21.416 { 00:22:21.416 "params": { 00:22:21.416 "name": "Nvme$subsystem", 00:22:21.416 "trtype": "$TEST_TRANSPORT", 00:22:21.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.416 "adrfam": "ipv4", 00:22:21.416 "trsvcid": "$NVMF_PORT", 00:22:21.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.416 "hdgst": ${hdgst:-false}, 00:22:21.416 "ddgst": ${ddgst:-false} 00:22:21.416 }, 00:22:21.416 "method": "bdev_nvme_attach_controller" 00:22:21.416 } 00:22:21.416 EOF 00:22:21.416 )") 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:21.416 { 00:22:21.416 "params": { 00:22:21.416 "name": "Nvme$subsystem", 00:22:21.416 "trtype": "$TEST_TRANSPORT", 00:22:21.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.416 "adrfam": "ipv4", 00:22:21.416 "trsvcid": "$NVMF_PORT", 00:22:21.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.416 "hdgst": ${hdgst:-false}, 00:22:21.416 "ddgst": ${ddgst:-false} 00:22:21.416 }, 00:22:21.416 "method": "bdev_nvme_attach_controller" 00:22:21.416 } 00:22:21.416 EOF 00:22:21.416 )") 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:21.416 { 00:22:21.416 "params": { 00:22:21.416 "name": "Nvme$subsystem", 00:22:21.416 "trtype": "$TEST_TRANSPORT", 00:22:21.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.416 "adrfam": "ipv4", 00:22:21.416 "trsvcid": "$NVMF_PORT", 00:22:21.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.416 "hdgst": ${hdgst:-false}, 00:22:21.416 "ddgst": ${ddgst:-false} 00:22:21.416 }, 00:22:21.416 "method": "bdev_nvme_attach_controller" 00:22:21.416 } 00:22:21.416 EOF 00:22:21.416 )") 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:21.416 { 00:22:21.416 "params": { 00:22:21.416 "name": "Nvme$subsystem", 00:22:21.416 "trtype": "$TEST_TRANSPORT", 00:22:21.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.416 "adrfam": "ipv4", 00:22:21.416 "trsvcid": "$NVMF_PORT", 00:22:21.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.416 "hdgst": ${hdgst:-false}, 00:22:21.416 "ddgst": ${ddgst:-false} 00:22:21.416 }, 00:22:21.416 "method": "bdev_nvme_attach_controller" 00:22:21.416 } 00:22:21.416 EOF 00:22:21.416 )") 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:21.416 { 00:22:21.416 "params": { 00:22:21.416 "name": "Nvme$subsystem", 00:22:21.416 "trtype": "$TEST_TRANSPORT", 00:22:21.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.416 "adrfam": "ipv4", 00:22:21.416 "trsvcid": "$NVMF_PORT", 00:22:21.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.416 "hdgst": ${hdgst:-false}, 00:22:21.416 "ddgst": ${ddgst:-false} 00:22:21.416 }, 00:22:21.416 "method": "bdev_nvme_attach_controller" 00:22:21.416 } 00:22:21.416 EOF 00:22:21.416 )") 00:22:21.416 [2024-05-15 01:24:56.925496] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:22:21.416 [2024-05-15 01:24:56.925551] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:21.416 { 00:22:21.416 "params": { 00:22:21.416 "name": "Nvme$subsystem", 00:22:21.416 "trtype": "$TEST_TRANSPORT", 00:22:21.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.416 "adrfam": "ipv4", 00:22:21.416 "trsvcid": "$NVMF_PORT", 00:22:21.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.416 "hdgst": ${hdgst:-false}, 00:22:21.416 "ddgst": ${ddgst:-false} 00:22:21.416 }, 00:22:21.416 "method": "bdev_nvme_attach_controller" 00:22:21.416 } 00:22:21.416 EOF 00:22:21.416 )") 00:22:21.416 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:21.417 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:21.417 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:21.417 { 00:22:21.417 "params": { 00:22:21.417 "name": "Nvme$subsystem", 00:22:21.417 "trtype": "$TEST_TRANSPORT", 00:22:21.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.417 "adrfam": "ipv4", 00:22:21.417 "trsvcid": "$NVMF_PORT", 00:22:21.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.417 "hdgst": ${hdgst:-false}, 00:22:21.417 "ddgst": ${ddgst:-false} 00:22:21.417 }, 00:22:21.417 "method": "bdev_nvme_attach_controller" 00:22:21.417 } 00:22:21.417 EOF 00:22:21.417 )") 00:22:21.417 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:21.417 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:21.417 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:21.417 { 00:22:21.417 "params": { 00:22:21.417 "name": "Nvme$subsystem", 00:22:21.417 "trtype": "$TEST_TRANSPORT", 00:22:21.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.417 "adrfam": "ipv4", 00:22:21.417 "trsvcid": "$NVMF_PORT", 00:22:21.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:21.417 "hdgst": ${hdgst:-false}, 00:22:21.417 "ddgst": ${ddgst:-false} 00:22:21.417 }, 00:22:21.417 "method": "bdev_nvme_attach_controller" 00:22:21.417 } 00:22:21.417 EOF 00:22:21.417 )") 00:22:21.417 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:21.417 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:21.417 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:21.417 { 00:22:21.417 "params": { 00:22:21.417 "name": "Nvme$subsystem", 00:22:21.417 "trtype": "$TEST_TRANSPORT", 00:22:21.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:21.417 "adrfam": "ipv4", 00:22:21.417 "trsvcid": "$NVMF_PORT", 00:22:21.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:21.417 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:22:21.417 "hdgst": ${hdgst:-false}, 00:22:21.417 "ddgst": ${ddgst:-false} 00:22:21.417 }, 00:22:21.417 "method": "bdev_nvme_attach_controller" 00:22:21.417 } 00:22:21.417 EOF 00:22:21.417 )") 00:22:21.417 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.417 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:21.417 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:21.417 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:21.417 01:24:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:21.417 "params": { 00:22:21.417 "name": "Nvme1", 00:22:21.417 "trtype": "tcp", 00:22:21.417 "traddr": "10.0.0.2", 00:22:21.417 "adrfam": "ipv4", 00:22:21.417 "trsvcid": "4420", 00:22:21.417 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.417 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:21.417 "hdgst": false, 00:22:21.417 "ddgst": false 00:22:21.417 }, 00:22:21.417 "method": "bdev_nvme_attach_controller" 00:22:21.417 },{ 00:22:21.417 "params": { 00:22:21.417 "name": "Nvme2", 00:22:21.417 "trtype": "tcp", 00:22:21.417 "traddr": "10.0.0.2", 00:22:21.417 "adrfam": "ipv4", 00:22:21.417 "trsvcid": "4420", 00:22:21.417 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:21.417 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:21.417 "hdgst": false, 00:22:21.417 "ddgst": false 00:22:21.417 }, 00:22:21.417 "method": "bdev_nvme_attach_controller" 00:22:21.417 },{ 00:22:21.417 "params": { 00:22:21.417 "name": "Nvme3", 00:22:21.417 "trtype": "tcp", 00:22:21.417 "traddr": "10.0.0.2", 00:22:21.417 "adrfam": "ipv4", 00:22:21.417 "trsvcid": "4420", 00:22:21.417 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:21.417 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:21.417 "hdgst": false, 00:22:21.417 "ddgst": false 00:22:21.417 }, 00:22:21.417 "method": "bdev_nvme_attach_controller" 00:22:21.417 },{ 00:22:21.417 "params": { 00:22:21.417 "name": "Nvme4", 00:22:21.417 "trtype": "tcp", 00:22:21.417 "traddr": "10.0.0.2", 00:22:21.417 "adrfam": "ipv4", 00:22:21.417 "trsvcid": "4420", 00:22:21.417 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:21.417 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:21.417 "hdgst": false, 00:22:21.417 "ddgst": false 00:22:21.417 }, 00:22:21.417 "method": "bdev_nvme_attach_controller" 00:22:21.417 },{ 00:22:21.417 "params": { 00:22:21.417 "name": "Nvme5", 00:22:21.417 "trtype": "tcp", 00:22:21.417 "traddr": "10.0.0.2", 00:22:21.417 "adrfam": "ipv4", 00:22:21.417 "trsvcid": "4420", 00:22:21.417 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:21.417 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:21.417 "hdgst": false, 00:22:21.417 "ddgst": false 00:22:21.417 }, 00:22:21.417 "method": "bdev_nvme_attach_controller" 00:22:21.417 },{ 00:22:21.417 "params": { 00:22:21.417 "name": "Nvme6", 00:22:21.417 "trtype": "tcp", 00:22:21.417 "traddr": "10.0.0.2", 00:22:21.417 "adrfam": "ipv4", 00:22:21.417 "trsvcid": "4420", 00:22:21.417 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:21.417 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:21.417 "hdgst": false, 00:22:21.417 "ddgst": false 00:22:21.417 }, 00:22:21.417 "method": "bdev_nvme_attach_controller" 00:22:21.417 },{ 00:22:21.417 "params": { 00:22:21.417 "name": "Nvme7", 00:22:21.417 "trtype": "tcp", 00:22:21.417 "traddr": "10.0.0.2", 00:22:21.417 "adrfam": "ipv4", 00:22:21.417 "trsvcid": "4420", 00:22:21.417 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:21.417 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:22:21.417 "hdgst": false, 00:22:21.417 "ddgst": false 00:22:21.417 }, 00:22:21.417 "method": "bdev_nvme_attach_controller" 00:22:21.417 },{ 00:22:21.417 "params": { 00:22:21.417 "name": "Nvme8", 00:22:21.417 "trtype": "tcp", 00:22:21.417 "traddr": "10.0.0.2", 00:22:21.417 "adrfam": "ipv4", 00:22:21.417 "trsvcid": "4420", 00:22:21.417 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:21.417 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:21.417 "hdgst": false, 00:22:21.417 "ddgst": false 00:22:21.417 }, 00:22:21.417 "method": "bdev_nvme_attach_controller" 00:22:21.417 },{ 00:22:21.417 "params": { 00:22:21.417 "name": "Nvme9", 00:22:21.417 "trtype": "tcp", 00:22:21.417 "traddr": "10.0.0.2", 00:22:21.417 "adrfam": "ipv4", 00:22:21.417 "trsvcid": "4420", 00:22:21.417 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:21.417 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:21.417 "hdgst": false, 00:22:21.417 "ddgst": false 00:22:21.417 }, 00:22:21.417 "method": "bdev_nvme_attach_controller" 00:22:21.417 },{ 00:22:21.417 "params": { 00:22:21.417 "name": "Nvme10", 00:22:21.417 "trtype": "tcp", 00:22:21.417 "traddr": "10.0.0.2", 00:22:21.417 "adrfam": "ipv4", 00:22:21.417 "trsvcid": "4420", 00:22:21.417 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:21.417 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:21.417 "hdgst": false, 00:22:21.417 "ddgst": false 00:22:21.417 }, 00:22:21.417 "method": "bdev_nvme_attach_controller" 00:22:21.417 }' 00:22:21.417 [2024-05-15 01:24:56.998654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:21.417 [2024-05-15 01:24:57.067009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.322 01:24:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:23.322 01:24:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:22:23.322 01:24:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:23.322 01:24:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:23.322 01:24:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:23.322 01:24:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:23.322 01:24:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 4170232 00:22:23.322 01:24:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:23.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 4170232 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:23.322 01:24:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:22:23.890 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 4169911 00:22:23.890 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:23.890 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:23.890 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:23.890 01:24:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:23.890 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:23.891 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:23.891 { 00:22:23.891 "params": { 00:22:23.891 "name": "Nvme$subsystem", 00:22:23.891 "trtype": "$TEST_TRANSPORT", 00:22:23.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.891 "adrfam": "ipv4", 00:22:23.891 "trsvcid": "$NVMF_PORT", 00:22:23.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.891 "hdgst": ${hdgst:-false}, 00:22:23.891 "ddgst": ${ddgst:-false} 00:22:23.891 }, 00:22:23.891 "method": "bdev_nvme_attach_controller" 00:22:23.891 } 00:22:23.891 EOF 00:22:23.891 )") 00:22:23.891 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:23.891 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:23.891 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:23.891 { 00:22:23.891 "params": { 00:22:23.891 "name": "Nvme$subsystem", 00:22:23.891 "trtype": "$TEST_TRANSPORT", 00:22:23.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.891 "adrfam": "ipv4", 00:22:23.891 "trsvcid": "$NVMF_PORT", 00:22:23.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.891 "hdgst": ${hdgst:-false}, 00:22:23.891 "ddgst": ${ddgst:-false} 00:22:23.891 }, 00:22:23.891 "method": "bdev_nvme_attach_controller" 00:22:23.891 } 00:22:23.891 EOF 00:22:23.891 )") 00:22:23.891 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:23.891 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:23.891 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:23.891 { 00:22:23.891 "params": { 00:22:23.891 "name": "Nvme$subsystem", 00:22:23.891 "trtype": "$TEST_TRANSPORT", 00:22:23.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.891 "adrfam": "ipv4", 00:22:23.891 "trsvcid": "$NVMF_PORT", 00:22:23.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.891 "hdgst": ${hdgst:-false}, 00:22:23.891 "ddgst": ${ddgst:-false} 00:22:23.891 }, 00:22:23.891 "method": "bdev_nvme_attach_controller" 00:22:23.891 } 00:22:23.891 EOF 00:22:23.891 )") 00:22:23.891 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:23.891 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:23.891 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:23.891 { 00:22:23.891 "params": { 00:22:23.891 "name": "Nvme$subsystem", 00:22:23.891 "trtype": "$TEST_TRANSPORT", 00:22:23.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.891 "adrfam": "ipv4", 00:22:23.891 "trsvcid": "$NVMF_PORT", 00:22:23.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.891 "hdgst": ${hdgst:-false}, 00:22:23.891 "ddgst": ${ddgst:-false} 00:22:23.891 }, 00:22:23.891 "method": "bdev_nvme_attach_controller" 00:22:23.891 } 00:22:23.891 EOF 00:22:23.891 )") 00:22:23.891 01:24:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:23.891 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:23.891 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:23.891 { 00:22:23.891 "params": { 00:22:23.891 "name": "Nvme$subsystem", 00:22:23.891 "trtype": "$TEST_TRANSPORT", 00:22:23.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.891 "adrfam": "ipv4", 00:22:23.891 "trsvcid": "$NVMF_PORT", 00:22:23.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.891 "hdgst": ${hdgst:-false}, 00:22:23.891 "ddgst": ${ddgst:-false} 00:22:23.891 }, 00:22:23.891 "method": "bdev_nvme_attach_controller" 00:22:23.891 } 00:22:23.891 EOF 00:22:23.891 )") 00:22:23.891 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:24.151 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:24.151 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:24.151 { 00:22:24.151 "params": { 00:22:24.151 "name": "Nvme$subsystem", 00:22:24.151 "trtype": "$TEST_TRANSPORT", 00:22:24.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.151 "adrfam": "ipv4", 00:22:24.151 "trsvcid": "$NVMF_PORT", 00:22:24.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.151 "hdgst": ${hdgst:-false}, 00:22:24.151 "ddgst": ${ddgst:-false} 00:22:24.151 }, 00:22:24.151 "method": "bdev_nvme_attach_controller" 00:22:24.151 } 00:22:24.151 EOF 00:22:24.151 )") 00:22:24.151 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:24.151 [2024-05-15 01:24:59.592170] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
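The repeated config+=(...) blocks above are gen_nvmf_target_json expanding one heredoc per subsystem id and handing the joined result to jq as the --json input for bdevperf. A reduced, standalone sketch of the same pattern is below; it hard-codes 10.0.0.2:4420 where the harness substitutes NVMF_FIRST_TARGET_IP and NVMF_PORT, and the top-level "subsystems"/"bdev" envelope at the end is an assumption for illustration, not copied from this excerpt (jq must be installed):

#!/usr/bin/env bash
# Reduced stand-in for gen_nvmf_target_json: one attach-controller stanza per id.
config=()
for subsystem in "${@:-1}"; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the stanzas with commas and wrap them in an assumed bdev-subsystem envelope.
IFS=,
printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${config[*]}" | jq .

Run as, for example, ./gen.sh 1 2 3 to get three attach-controller entries; the trace above does the equivalent for ids 1 through 10.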
00:22:24.151 [2024-05-15 01:24:59.592233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4170552 ] 00:22:24.151 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:24.151 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:24.151 { 00:22:24.151 "params": { 00:22:24.151 "name": "Nvme$subsystem", 00:22:24.151 "trtype": "$TEST_TRANSPORT", 00:22:24.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.151 "adrfam": "ipv4", 00:22:24.151 "trsvcid": "$NVMF_PORT", 00:22:24.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.151 "hdgst": ${hdgst:-false}, 00:22:24.151 "ddgst": ${ddgst:-false} 00:22:24.151 }, 00:22:24.151 "method": "bdev_nvme_attach_controller" 00:22:24.151 } 00:22:24.151 EOF 00:22:24.151 )") 00:22:24.151 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:24.151 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:24.151 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:24.151 { 00:22:24.151 "params": { 00:22:24.151 "name": "Nvme$subsystem", 00:22:24.151 "trtype": "$TEST_TRANSPORT", 00:22:24.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.151 "adrfam": "ipv4", 00:22:24.151 "trsvcid": "$NVMF_PORT", 00:22:24.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.151 "hdgst": ${hdgst:-false}, 00:22:24.151 "ddgst": ${ddgst:-false} 00:22:24.151 }, 00:22:24.151 "method": "bdev_nvme_attach_controller" 00:22:24.151 } 00:22:24.151 EOF 00:22:24.151 )") 00:22:24.151 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:24.151 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:24.151 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:24.151 { 00:22:24.151 "params": { 00:22:24.151 "name": "Nvme$subsystem", 00:22:24.151 "trtype": "$TEST_TRANSPORT", 00:22:24.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.151 "adrfam": "ipv4", 00:22:24.151 "trsvcid": "$NVMF_PORT", 00:22:24.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.151 "hdgst": ${hdgst:-false}, 00:22:24.151 "ddgst": ${ddgst:-false} 00:22:24.151 }, 00:22:24.151 "method": "bdev_nvme_attach_controller" 00:22:24.151 } 00:22:24.151 EOF 00:22:24.151 )") 00:22:24.151 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:24.151 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:24.151 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:24.151 { 00:22:24.151 "params": { 00:22:24.151 "name": "Nvme$subsystem", 00:22:24.151 "trtype": "$TEST_TRANSPORT", 00:22:24.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:24.151 "adrfam": "ipv4", 00:22:24.151 "trsvcid": "$NVMF_PORT", 00:22:24.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:24.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:24.151 "hdgst": ${hdgst:-false}, 
00:22:24.151 "ddgst": ${ddgst:-false} 00:22:24.151 }, 00:22:24.151 "method": "bdev_nvme_attach_controller" 00:22:24.151 } 00:22:24.151 EOF 00:22:24.151 )") 00:22:24.151 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:24.151 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.151 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:24.151 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:24.151 01:24:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:24.151 "params": { 00:22:24.151 "name": "Nvme1", 00:22:24.151 "trtype": "tcp", 00:22:24.151 "traddr": "10.0.0.2", 00:22:24.151 "adrfam": "ipv4", 00:22:24.151 "trsvcid": "4420", 00:22:24.151 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:24.151 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:24.151 "hdgst": false, 00:22:24.151 "ddgst": false 00:22:24.151 }, 00:22:24.151 "method": "bdev_nvme_attach_controller" 00:22:24.151 },{ 00:22:24.151 "params": { 00:22:24.151 "name": "Nvme2", 00:22:24.152 "trtype": "tcp", 00:22:24.152 "traddr": "10.0.0.2", 00:22:24.152 "adrfam": "ipv4", 00:22:24.152 "trsvcid": "4420", 00:22:24.152 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:24.152 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:24.152 "hdgst": false, 00:22:24.152 "ddgst": false 00:22:24.152 }, 00:22:24.152 "method": "bdev_nvme_attach_controller" 00:22:24.152 },{ 00:22:24.152 "params": { 00:22:24.152 "name": "Nvme3", 00:22:24.152 "trtype": "tcp", 00:22:24.152 "traddr": "10.0.0.2", 00:22:24.152 "adrfam": "ipv4", 00:22:24.152 "trsvcid": "4420", 00:22:24.152 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:24.152 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:24.152 "hdgst": false, 00:22:24.152 "ddgst": false 00:22:24.152 }, 00:22:24.152 "method": "bdev_nvme_attach_controller" 00:22:24.152 },{ 00:22:24.152 "params": { 00:22:24.152 "name": "Nvme4", 00:22:24.152 "trtype": "tcp", 00:22:24.152 "traddr": "10.0.0.2", 00:22:24.152 "adrfam": "ipv4", 00:22:24.152 "trsvcid": "4420", 00:22:24.152 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:24.152 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:24.152 "hdgst": false, 00:22:24.152 "ddgst": false 00:22:24.152 }, 00:22:24.152 "method": "bdev_nvme_attach_controller" 00:22:24.152 },{ 00:22:24.152 "params": { 00:22:24.152 "name": "Nvme5", 00:22:24.152 "trtype": "tcp", 00:22:24.152 "traddr": "10.0.0.2", 00:22:24.152 "adrfam": "ipv4", 00:22:24.152 "trsvcid": "4420", 00:22:24.152 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:24.152 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:24.152 "hdgst": false, 00:22:24.152 "ddgst": false 00:22:24.152 }, 00:22:24.152 "method": "bdev_nvme_attach_controller" 00:22:24.152 },{ 00:22:24.152 "params": { 00:22:24.152 "name": "Nvme6", 00:22:24.152 "trtype": "tcp", 00:22:24.152 "traddr": "10.0.0.2", 00:22:24.152 "adrfam": "ipv4", 00:22:24.152 "trsvcid": "4420", 00:22:24.152 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:24.152 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:24.152 "hdgst": false, 00:22:24.152 "ddgst": false 00:22:24.152 }, 00:22:24.152 "method": "bdev_nvme_attach_controller" 00:22:24.152 },{ 00:22:24.152 "params": { 00:22:24.152 "name": "Nvme7", 00:22:24.152 "trtype": "tcp", 00:22:24.152 "traddr": "10.0.0.2", 00:22:24.152 "adrfam": "ipv4", 00:22:24.152 "trsvcid": "4420", 00:22:24.152 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:24.152 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:24.152 "hdgst": false, 00:22:24.152 "ddgst": false 
00:22:24.152 }, 00:22:24.152 "method": "bdev_nvme_attach_controller" 00:22:24.152 },{ 00:22:24.152 "params": { 00:22:24.152 "name": "Nvme8", 00:22:24.152 "trtype": "tcp", 00:22:24.152 "traddr": "10.0.0.2", 00:22:24.152 "adrfam": "ipv4", 00:22:24.152 "trsvcid": "4420", 00:22:24.152 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:24.152 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:24.152 "hdgst": false, 00:22:24.152 "ddgst": false 00:22:24.152 }, 00:22:24.152 "method": "bdev_nvme_attach_controller" 00:22:24.152 },{ 00:22:24.152 "params": { 00:22:24.152 "name": "Nvme9", 00:22:24.152 "trtype": "tcp", 00:22:24.152 "traddr": "10.0.0.2", 00:22:24.152 "adrfam": "ipv4", 00:22:24.152 "trsvcid": "4420", 00:22:24.152 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:24.152 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:24.152 "hdgst": false, 00:22:24.152 "ddgst": false 00:22:24.152 }, 00:22:24.152 "method": "bdev_nvme_attach_controller" 00:22:24.152 },{ 00:22:24.152 "params": { 00:22:24.152 "name": "Nvme10", 00:22:24.152 "trtype": "tcp", 00:22:24.152 "traddr": "10.0.0.2", 00:22:24.152 "adrfam": "ipv4", 00:22:24.152 "trsvcid": "4420", 00:22:24.152 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:24.152 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:24.152 "hdgst": false, 00:22:24.152 "ddgst": false 00:22:24.152 }, 00:22:24.152 "method": "bdev_nvme_attach_controller" 00:22:24.152 }' 00:22:24.152 [2024-05-15 01:24:59.666430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.152 [2024-05-15 01:24:59.737048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.056 Running I/O for 1 seconds... 00:22:26.994 00:22:26.994 Latency(us) 00:22:26.994 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.994 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.994 Verification LBA range: start 0x0 length 0x400 00:22:26.994 Nvme1n1 : 1.06 302.91 18.93 0.00 0.00 209219.09 19713.23 203004.31 00:22:26.994 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.994 Verification LBA range: start 0x0 length 0x400 00:22:26.994 Nvme2n1 : 1.10 233.68 14.60 0.00 0.00 267697.36 21286.09 253335.96 00:22:26.994 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.994 Verification LBA range: start 0x0 length 0x400 00:22:26.994 Nvme3n1 : 1.05 244.35 15.27 0.00 0.00 252004.15 19188.94 251658.24 00:22:26.994 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.994 Verification LBA range: start 0x0 length 0x400 00:22:26.994 Nvme4n1 : 1.06 305.44 19.09 0.00 0.00 198572.79 17406.36 197132.29 00:22:26.994 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.994 Verification LBA range: start 0x0 length 0x400 00:22:26.994 Nvme5n1 : 1.14 224.41 14.03 0.00 0.00 256719.67 20447.23 249980.52 00:22:26.994 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.994 Verification LBA range: start 0x0 length 0x400 00:22:26.994 Nvme6n1 : 1.14 279.79 17.49 0.00 0.00 211929.33 19398.66 223136.97 00:22:26.994 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.994 Verification LBA range: start 0x0 length 0x400 00:22:26.994 Nvme7n1 : 1.12 286.07 17.88 0.00 0.00 203926.73 18874.37 210554.06 00:22:26.994 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.994 Verification LBA range: start 0x0 length 0x400 00:22:26.994 Nvme8n1 : 1.16 330.09 20.63 0.00 0.00 174712.83 
18245.22 174483.05 00:22:26.994 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.994 Verification LBA range: start 0x0 length 0x400 00:22:26.994 Nvme9n1 : 1.17 328.58 20.54 0.00 0.00 173356.10 10643.05 206359.76 00:22:26.994 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:26.994 Verification LBA range: start 0x0 length 0x400 00:22:26.994 Nvme10n1 : 1.18 324.22 20.26 0.00 0.00 173358.56 7444.89 206359.76 00:22:26.994 =================================================================================================================== 00:22:26.994 Total : 2859.55 178.72 0.00 0.00 207018.19 7444.89 253335.96 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:27.254 rmmod nvme_tcp 00:22:27.254 rmmod nvme_fabrics 00:22:27.254 rmmod nvme_keyring 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 4169911 ']' 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 4169911 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 4169911 ']' 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 4169911 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4169911 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 4169911' 00:22:27.254 killing process with pid 4169911 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 4169911 00:22:27.254 [2024-05-15 01:25:02.860994] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:27.254 01:25:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 4169911 00:22:27.823 01:25:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:27.823 01:25:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:27.823 01:25:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:27.823 01:25:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:27.823 01:25:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:27.823 01:25:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.823 01:25:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:27.823 01:25:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:29.789 00:22:29.789 real 0m16.651s 00:22:29.789 user 0m36.019s 00:22:29.789 sys 0m6.948s 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:29.789 ************************************ 00:22:29.789 END TEST nvmf_shutdown_tc1 00:22:29.789 ************************************ 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:29.789 ************************************ 00:22:29.789 START TEST nvmf_shutdown_tc2 00:22:29.789 ************************************ 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:29.789 
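The nvmftestfini sequence that closes tc1 is the mirror image of the setup: stop the nvmf target process, unload the kernel NVMe-oF initiator modules, then tear the test namespace and addresses back down. A condensed sketch of that order of operations, using the names seen in this run (cvl_0_1, cvl_0_0_ns_spdk) and a placeholder nvmfpid; the explicit ip netns delete is an assumption standing in for the harness's _remove_spdk_ns helper, whose output is redirected away above:

# Stop the target first so the initiator modules have no active connections left.
kill "$nvmfpid" 2>/dev/null || true
# Unload the kernel NVMe/TCP initiator stack; -r removes, -v prints what was removed.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# Drop the target-side namespace (assumed step) and flush the initiator-side address.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1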
01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:29.789 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:29.789 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:29.789 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:29.790 Found net devices under 0000:af:00.0: cvl_0_0 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:29.790 Found net devices under 0000:af:00.1: cvl_0_1 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
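The device discovery above boils down to a sysfs lookup: for each E810 port (PCI addresses 0000:af:00.0 and 0000:af:00.1, device id 0x159b, driver ice) the script globs /sys/bus/pci/devices/<addr>/net/ to learn which kernel net devices back that function, which is how cvl_0_0 and cvl_0_1 are found. A standalone sketch of the same lookup, with the PCI addresses from this host hard-coded:

# List the net devices backing each NIC port; adjust the addresses for other hosts.
for pci in 0000:af:00.0 0000:af:00.1; do
    for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$netdir" ] || continue            # port may have no netdev bound
        dev=${netdir##*/}
        state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
        echo "Found net device under $pci: $dev (state: ${state:-unknown})"
    done
done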
00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:29.790 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.048 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.048 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.048 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.048 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:30.048 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.048 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:30.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:30.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:22:30.307 00:22:30.307 --- 10.0.0.2 ping statistics --- 00:22:30.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.307 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:30.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:22:30.307 00:22:30.307 --- 10.0.0.1 ping statistics --- 00:22:30.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.307 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=4171711 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 4171711 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 4171711 ']' 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:30.307 01:25:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:30.307 [2024-05-15 01:25:05.880362] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
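What nvmf_tcp_init builds above is a single-host, two-port loopback: one physical port is moved into a private namespace and carries the target address (cvl_0_0, 10.0.0.2, where nvmf_tgt will listen), the other stays in the root namespace as the initiator side (cvl_0_1, 10.0.0.1), TCP port 4420 is accepted on the root-namespace port, and a ping in each direction confirms the path. A condensed sketch using exactly the names from this run:

# Target port goes into its own namespace so traffic really crosses the link.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# Addressing: initiator 10.0.0.1 in the root namespace, target 10.0.0.2 inside it.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Accept NVMe/TCP (port 4420) on the root-namespace port, then check reachability.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The sub-millisecond round-trip times in the ping output above are the expected sign that both ports are up and the namespace plumbing is correct before the target is started inside cvl_0_0_ns_spdk.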
00:22:30.307 [2024-05-15 01:25:05.880412] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:30.307 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.307 [2024-05-15 01:25:05.955908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:30.565 [2024-05-15 01:25:06.030592] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.565 [2024-05-15 01:25:06.030626] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.565 [2024-05-15 01:25:06.030635] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.565 [2024-05-15 01:25:06.030643] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.565 [2024-05-15 01:25:06.030650] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:30.565 [2024-05-15 01:25:06.030750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.565 [2024-05-15 01:25:06.030831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:30.565 [2024-05-15 01:25:06.030944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:30.565 [2024-05-15 01:25:06.030945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:31.131 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:31.131 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:22:31.131 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:31.131 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.131 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:31.131 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.131 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:31.131 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.131 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:31.131 [2024-05-15 01:25:06.735124] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.131 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.131 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:31.131 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:31.131 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:31.131 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:31.132 01:25:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:31.391 Malloc1 00:22:31.391 [2024-05-15 01:25:06.849823] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:31.391 [2024-05-15 01:25:06.850066] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.391 Malloc2 00:22:31.391 Malloc3 00:22:31.391 Malloc4 00:22:31.391 Malloc5 00:22:31.391 Malloc6 00:22:31.391 Malloc7 00:22:31.650 Malloc8 00:22:31.650 Malloc9 00:22:31.650 Malloc10 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:31.650 01:25:07 
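The loop above only appends text to rpcs.txt; the single rpc_cmd call that follows replays the whole file against the target, which is why Malloc1 through Malloc10 and the 10.0.0.2:4420 listener all appear at once. The exact lines the script writes are not visible in this excerpt, so the block below is an assumed reconstruction for one subsystem, built from standard SPDK rpc.py commands (bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener); the malloc size, block size, and serial number are illustrative:

# Assumed per-subsystem block (i=1 shown), not copied from shutdown.sh.
i=1
cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
# The harness then replays the accumulated file in one shot, roughly equivalent to:
# scripts/rpc.py < rpcs.txt

Batching the ten subsystems into one RPC replay keeps the setup to a single round of socket traffic instead of forty individual rpc.py invocations.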
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=4172034 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 4172034 /var/tmp/bdevperf.sock 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 4172034 ']' 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:31.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.650 { 00:22:31.650 "params": { 00:22:31.650 "name": "Nvme$subsystem", 00:22:31.650 "trtype": "$TEST_TRANSPORT", 00:22:31.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.650 "adrfam": "ipv4", 00:22:31.650 "trsvcid": "$NVMF_PORT", 00:22:31.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.650 "hdgst": ${hdgst:-false}, 00:22:31.650 "ddgst": ${ddgst:-false} 00:22:31.650 }, 00:22:31.650 "method": "bdev_nvme_attach_controller" 00:22:31.650 } 00:22:31.650 EOF 00:22:31.650 )") 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.650 { 00:22:31.650 "params": { 00:22:31.650 "name": "Nvme$subsystem", 00:22:31.650 "trtype": "$TEST_TRANSPORT", 00:22:31.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.650 "adrfam": "ipv4", 00:22:31.650 "trsvcid": "$NVMF_PORT", 00:22:31.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.650 "hdgst": ${hdgst:-false}, 00:22:31.650 "ddgst": ${ddgst:-false} 00:22:31.650 }, 00:22:31.650 "method": "bdev_nvme_attach_controller" 00:22:31.650 } 00:22:31.650 EOF 00:22:31.650 )") 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.650 { 00:22:31.650 "params": { 00:22:31.650 "name": "Nvme$subsystem", 00:22:31.650 "trtype": "$TEST_TRANSPORT", 00:22:31.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.650 "adrfam": "ipv4", 00:22:31.650 "trsvcid": "$NVMF_PORT", 00:22:31.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.650 "hdgst": ${hdgst:-false}, 00:22:31.650 "ddgst": ${ddgst:-false} 00:22:31.650 }, 00:22:31.650 "method": "bdev_nvme_attach_controller" 00:22:31.650 } 00:22:31.650 EOF 00:22:31.650 )") 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.650 { 00:22:31.650 "params": { 00:22:31.650 "name": "Nvme$subsystem", 00:22:31.650 "trtype": "$TEST_TRANSPORT", 00:22:31.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.650 "adrfam": "ipv4", 00:22:31.650 "trsvcid": "$NVMF_PORT", 00:22:31.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.650 "hdgst": ${hdgst:-false}, 00:22:31.650 "ddgst": ${ddgst:-false} 00:22:31.650 }, 00:22:31.650 "method": "bdev_nvme_attach_controller" 00:22:31.650 } 00:22:31.650 EOF 00:22:31.650 )") 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.650 { 00:22:31.650 "params": { 00:22:31.650 "name": "Nvme$subsystem", 00:22:31.650 "trtype": "$TEST_TRANSPORT", 00:22:31.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.650 "adrfam": "ipv4", 00:22:31.650 "trsvcid": "$NVMF_PORT", 00:22:31.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.650 "hdgst": ${hdgst:-false}, 00:22:31.650 "ddgst": ${ddgst:-false} 00:22:31.650 }, 00:22:31.650 "method": "bdev_nvme_attach_controller" 00:22:31.650 } 00:22:31.650 EOF 00:22:31.650 )") 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.650 { 00:22:31.650 "params": { 00:22:31.650 "name": "Nvme$subsystem", 00:22:31.650 "trtype": "$TEST_TRANSPORT", 00:22:31.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.650 "adrfam": "ipv4", 00:22:31.650 "trsvcid": "$NVMF_PORT", 00:22:31.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.650 "hdgst": ${hdgst:-false}, 00:22:31.650 "ddgst": ${ddgst:-false} 00:22:31.650 }, 00:22:31.650 "method": "bdev_nvme_attach_controller" 00:22:31.650 } 00:22:31.650 EOF 00:22:31.650 )") 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:31.650 [2024-05-15 01:25:07.332060] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 
23.11.0 initialization... 00:22:31.650 [2024-05-15 01:25:07.332114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4172034 ] 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.650 { 00:22:31.650 "params": { 00:22:31.650 "name": "Nvme$subsystem", 00:22:31.650 "trtype": "$TEST_TRANSPORT", 00:22:31.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.650 "adrfam": "ipv4", 00:22:31.650 "trsvcid": "$NVMF_PORT", 00:22:31.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.650 "hdgst": ${hdgst:-false}, 00:22:31.650 "ddgst": ${ddgst:-false} 00:22:31.650 }, 00:22:31.650 "method": "bdev_nvme_attach_controller" 00:22:31.650 } 00:22:31.650 EOF 00:22:31.650 )") 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.650 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.650 { 00:22:31.650 "params": { 00:22:31.650 "name": "Nvme$subsystem", 00:22:31.650 "trtype": "$TEST_TRANSPORT", 00:22:31.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.650 "adrfam": "ipv4", 00:22:31.650 "trsvcid": "$NVMF_PORT", 00:22:31.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.650 "hdgst": ${hdgst:-false}, 00:22:31.650 "ddgst": ${ddgst:-false} 00:22:31.650 }, 00:22:31.650 "method": "bdev_nvme_attach_controller" 00:22:31.650 } 00:22:31.650 EOF 00:22:31.650 )") 00:22:31.910 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:31.910 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.910 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.910 { 00:22:31.910 "params": { 00:22:31.910 "name": "Nvme$subsystem", 00:22:31.910 "trtype": "$TEST_TRANSPORT", 00:22:31.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.910 "adrfam": "ipv4", 00:22:31.910 "trsvcid": "$NVMF_PORT", 00:22:31.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.910 "hdgst": ${hdgst:-false}, 00:22:31.910 "ddgst": ${ddgst:-false} 00:22:31.910 }, 00:22:31.910 "method": "bdev_nvme_attach_controller" 00:22:31.910 } 00:22:31.910 EOF 00:22:31.910 )") 00:22:31.910 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:31.910 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:31.910 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:31.910 { 00:22:31.910 "params": { 00:22:31.910 "name": "Nvme$subsystem", 00:22:31.910 "trtype": "$TEST_TRANSPORT", 00:22:31.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:31.910 "adrfam": "ipv4", 00:22:31.910 "trsvcid": "$NVMF_PORT", 00:22:31.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:31.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:31.910 
"hdgst": ${hdgst:-false}, 00:22:31.910 "ddgst": ${ddgst:-false} 00:22:31.910 }, 00:22:31.910 "method": "bdev_nvme_attach_controller" 00:22:31.910 } 00:22:31.910 EOF 00:22:31.910 )") 00:22:31.910 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:22:31.910 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:22:31.910 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.910 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:22:31.910 01:25:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:31.910 "params": { 00:22:31.910 "name": "Nvme1", 00:22:31.910 "trtype": "tcp", 00:22:31.910 "traddr": "10.0.0.2", 00:22:31.910 "adrfam": "ipv4", 00:22:31.910 "trsvcid": "4420", 00:22:31.910 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.910 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:31.910 "hdgst": false, 00:22:31.910 "ddgst": false 00:22:31.910 }, 00:22:31.910 "method": "bdev_nvme_attach_controller" 00:22:31.910 },{ 00:22:31.910 "params": { 00:22:31.910 "name": "Nvme2", 00:22:31.910 "trtype": "tcp", 00:22:31.910 "traddr": "10.0.0.2", 00:22:31.910 "adrfam": "ipv4", 00:22:31.910 "trsvcid": "4420", 00:22:31.910 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:31.910 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:31.910 "hdgst": false, 00:22:31.910 "ddgst": false 00:22:31.910 }, 00:22:31.910 "method": "bdev_nvme_attach_controller" 00:22:31.910 },{ 00:22:31.910 "params": { 00:22:31.910 "name": "Nvme3", 00:22:31.910 "trtype": "tcp", 00:22:31.910 "traddr": "10.0.0.2", 00:22:31.910 "adrfam": "ipv4", 00:22:31.910 "trsvcid": "4420", 00:22:31.910 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:31.910 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:31.910 "hdgst": false, 00:22:31.910 "ddgst": false 00:22:31.910 }, 00:22:31.910 "method": "bdev_nvme_attach_controller" 00:22:31.910 },{ 00:22:31.910 "params": { 00:22:31.910 "name": "Nvme4", 00:22:31.910 "trtype": "tcp", 00:22:31.910 "traddr": "10.0.0.2", 00:22:31.910 "adrfam": "ipv4", 00:22:31.910 "trsvcid": "4420", 00:22:31.910 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:31.910 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:31.910 "hdgst": false, 00:22:31.910 "ddgst": false 00:22:31.910 }, 00:22:31.910 "method": "bdev_nvme_attach_controller" 00:22:31.910 },{ 00:22:31.910 "params": { 00:22:31.910 "name": "Nvme5", 00:22:31.910 "trtype": "tcp", 00:22:31.910 "traddr": "10.0.0.2", 00:22:31.910 "adrfam": "ipv4", 00:22:31.910 "trsvcid": "4420", 00:22:31.910 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:31.910 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:31.910 "hdgst": false, 00:22:31.910 "ddgst": false 00:22:31.910 }, 00:22:31.910 "method": "bdev_nvme_attach_controller" 00:22:31.910 },{ 00:22:31.911 "params": { 00:22:31.911 "name": "Nvme6", 00:22:31.911 "trtype": "tcp", 00:22:31.911 "traddr": "10.0.0.2", 00:22:31.911 "adrfam": "ipv4", 00:22:31.911 "trsvcid": "4420", 00:22:31.911 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:31.911 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:31.911 "hdgst": false, 00:22:31.911 "ddgst": false 00:22:31.911 }, 00:22:31.911 "method": "bdev_nvme_attach_controller" 00:22:31.911 },{ 00:22:31.911 "params": { 00:22:31.911 "name": "Nvme7", 00:22:31.911 "trtype": "tcp", 00:22:31.911 "traddr": "10.0.0.2", 00:22:31.911 "adrfam": "ipv4", 00:22:31.911 "trsvcid": "4420", 00:22:31.911 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:31.911 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:31.911 "hdgst": false, 
00:22:31.911 "ddgst": false 00:22:31.911 }, 00:22:31.911 "method": "bdev_nvme_attach_controller" 00:22:31.911 },{ 00:22:31.911 "params": { 00:22:31.911 "name": "Nvme8", 00:22:31.911 "trtype": "tcp", 00:22:31.911 "traddr": "10.0.0.2", 00:22:31.911 "adrfam": "ipv4", 00:22:31.911 "trsvcid": "4420", 00:22:31.911 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:31.911 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:31.911 "hdgst": false, 00:22:31.911 "ddgst": false 00:22:31.911 }, 00:22:31.911 "method": "bdev_nvme_attach_controller" 00:22:31.911 },{ 00:22:31.911 "params": { 00:22:31.911 "name": "Nvme9", 00:22:31.911 "trtype": "tcp", 00:22:31.911 "traddr": "10.0.0.2", 00:22:31.911 "adrfam": "ipv4", 00:22:31.911 "trsvcid": "4420", 00:22:31.911 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:31.911 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:31.911 "hdgst": false, 00:22:31.911 "ddgst": false 00:22:31.911 }, 00:22:31.911 "method": "bdev_nvme_attach_controller" 00:22:31.911 },{ 00:22:31.911 "params": { 00:22:31.911 "name": "Nvme10", 00:22:31.911 "trtype": "tcp", 00:22:31.911 "traddr": "10.0.0.2", 00:22:31.911 "adrfam": "ipv4", 00:22:31.911 "trsvcid": "4420", 00:22:31.911 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:31.911 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:31.911 "hdgst": false, 00:22:31.911 "ddgst": false 00:22:31.911 }, 00:22:31.911 "method": "bdev_nvme_attach_controller" 00:22:31.911 }' 00:22:31.911 [2024-05-15 01:25:07.403369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.911 [2024-05-15 01:25:07.472616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.287 Running I/O for 10 seconds... 00:22:33.287 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:33.287 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:22:33.287 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:33.287 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.287 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:33.287 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.287 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:33.287 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:33.287 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:33.287 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:22:33.287 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:22:33.287 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:33.287 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:33.287 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:33.287 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.287 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r 
'.bdevs[0].num_read_ops' 00:22:33.287 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:33.287 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.546 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:22:33.546 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:22:33.546 01:25:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:33.805 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:33.805 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:33.805 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:33.805 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:33.805 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:33.805 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:33.805 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:33.805 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:22:33.805 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:22:33.805 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 4172034 00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 4172034 ']' 00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 4172034 00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:22:34.064 01:25:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4172034
00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4172034'
00:22:34.064 killing process with pid 4172034
00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 4172034
00:22:34.064 01:25:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 4172034
00:22:34.323 Received shutdown signal, test time was about 0.988209 seconds
00:22:34.323
00:22:34.323 Latency(us)
00:22:34.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:34.323 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:34.323 Verification LBA range: start 0x0 length 0x400
00:22:34.323 Nvme1n1 : 0.91 282.15 17.63 0.00 0.00 224216.68 20447.23 208876.34
00:22:34.323 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:34.323 Verification LBA range: start 0x0 length 0x400
00:22:34.323 Nvme2n1 : 0.99 259.22 16.20 0.00 0.00 231378.74 22020.10 255013.68
00:22:34.323 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:34.323 Verification LBA range: start 0x0 length 0x400
00:22:34.324 Nvme3n1 : 0.91 353.18 22.07 0.00 0.00 173110.56 18350.08 181193.93
00:22:34.324 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:34.324 Verification LBA range: start 0x0 length 0x400
00:22:34.324 Nvme4n1 : 0.95 202.43 12.65 0.00 0.00 298316.60 20552.09 308700.77
00:22:34.324 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:34.324 Verification LBA range: start 0x0 length 0x400
00:22:34.324 Nvme5n1 : 0.90 285.47 17.84 0.00 0.00 206731.47 20237.52 216426.09
00:22:34.324 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:34.324 Verification LBA range: start 0x0 length 0x400
00:22:34.324 Nvme6n1 : 0.96 268.00 16.75 0.00 0.00 217915.80 19188.94 243269.63
00:22:34.324 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:34.324 Verification LBA range: start 0x0 length 0x400
00:22:34.324 Nvme7n1 : 0.91 209.93 13.12 0.00 0.00 271858.62 31457.28 241591.91
00:22:34.324 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:34.324 Verification LBA range: start 0x0 length 0x400
00:22:34.324 Nvme8n1 : 0.94 272.08 17.00 0.00 0.00 206818.92 15728.64 233203.30
00:22:34.324 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:34.324 Verification LBA range: start 0x0 length 0x400
00:22:34.324 Nvme9n1 : 0.95 268.95 16.81 0.00 0.00 205858.00 20447.23 226492.42
00:22:34.324 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:34.324 Verification LBA range: start 0x0 length 0x400
00:22:34.324 Nvme10n1 : 0.93 274.45 17.15 0.00 0.00 197354.29 20552.09 205520.90
00:22:34.324 ===================================================================================================================
00:22:34.324 Total : 2675.86 167.24 0.00 0.00 218901.90
15728.64 308700.77 00:22:34.324 01:25:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 4171711 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:35.698 rmmod nvme_tcp 00:22:35.698 rmmod nvme_fabrics 00:22:35.698 rmmod nvme_keyring 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 4171711 ']' 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 4171711 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 4171711 ']' 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 4171711 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4171711 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4171711' 00:22:35.698 killing process with pid 4171711 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 4171711 00:22:35.698 [2024-05-15 01:25:11.171667] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 
times 00:22:35.698 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 4171711 00:22:35.957 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:35.957 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:35.957 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:35.957 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.957 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:35.957 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.957 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.957 01:25:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:38.494 00:22:38.494 real 0m8.206s 00:22:38.494 user 0m24.413s 00:22:38.494 sys 0m1.772s 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:38.494 ************************************ 00:22:38.494 END TEST nvmf_shutdown_tc2 00:22:38.494 ************************************ 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:38.494 ************************************ 00:22:38.494 START TEST nvmf_shutdown_tc3 00:22:38.494 ************************************ 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy 
!= virt ]] 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.494 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.495 01:25:13 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:38.495 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:38.495 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:22:38.495 Found net devices under 0000:af:00.0: cvl_0_0 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:38.495 Found net devices under 0000:af:00.1: cvl_0_1 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.495 01:25:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.495 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.495 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:38.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:22:38.495 00:22:38.495 --- 10.0.0.2 ping statistics --- 00:22:38.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.495 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:22:38.495 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:22:38.495 00:22:38.495 --- 10.0.0.1 ping statistics --- 00:22:38.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.495 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:22:38.495 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.495 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:22:38.495 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:38.495 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.495 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:38.495 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:38.496 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.496 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:38.496 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:38.496 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:38.496 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.496 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:38.496 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:38.496 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=4173281 00:22:38.496 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 4173281 00:22:38.496 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:38.496 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 4173281 ']' 00:22:38.496 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.496 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:38.496 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.496 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:38.496 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:38.496 [2024-05-15 01:25:14.146274] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:22:38.496 [2024-05-15 01:25:14.146320] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.496 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.754 [2024-05-15 01:25:14.219887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.754 [2024-05-15 01:25:14.294510] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.754 [2024-05-15 01:25:14.294547] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.754 [2024-05-15 01:25:14.294556] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.754 [2024-05-15 01:25:14.294565] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.754 [2024-05-15 01:25:14.294572] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
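The nvmftestinit/nvmfappstart trace above boils down to a short setup sequence: one e810 port (cvl_0_0) is moved into a private network namespace for the target, its peer (cvl_0_1) stays in the root namespace as the initiator, the two get 10.0.0.2 and 10.0.0.1, TCP port 4420 is opened, reachability is pinged both ways, and nvmf_tgt is started inside the namespace. A condensed sketch of that sequence, using the interface names and addresses shown in the log (not the test script itself):

# Condensed sketch of the network split performed by nvmftestinit above;
# cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are taken from the trace.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator stays in the root namespace
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP back in
ping -c 1 10.0.0.2                                            # initiator -> target
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1     # target -> initiator
ip netns exec "$NVMF_TARGET_NAMESPACE" \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &             # target runs inside the namespace

Keeping the target in its own namespace forces the NVMe/TCP traffic onto the physical link between the two ports rather than loopback, which is the point of the NET_TYPE=phy configuration.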
00:22:38.754 [2024-05-15 01:25:14.294671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.754 [2024-05-15 01:25:14.294757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.754 [2024-05-15 01:25:14.294789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.754 [2024-05-15 01:25:14.294791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:39.323 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:39.323 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:22:39.323 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:39.323 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.323 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:39.323 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.323 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:39.323 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.323 01:25:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:39.323 [2024-05-15 01:25:15.000973] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.323 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.323 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:39.323 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:39.323 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:39.323 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:39.583 01:25:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:39.583 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:39.583 Malloc1 00:22:39.583 [2024-05-15 01:25:15.111480] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:39.583 [2024-05-15 01:25:15.111728] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.583 Malloc2 00:22:39.583 Malloc3 00:22:39.583 Malloc4 00:22:39.583 Malloc5 00:22:39.842 Malloc6 00:22:39.842 Malloc7 00:22:39.842 Malloc8 00:22:39.842 Malloc9 00:22:39.842 Malloc10 00:22:39.842 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:39.842 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:39.842 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.842 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=4173548 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 4173548 /var/tmp/bdevperf.sock 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 4173548 ']' 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
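The create_subsystems step above only logs its side effects (Malloc1 through Malloc10 and the listener on 10.0.0.2:4420); the batched rpcs.txt it feeds to the target is not echoed. Per subsystem i it should be roughly equivalent to the following standard SPDK RPCs, shown here via scripts/rpc.py for illustration; the bdev size, block size and serial number are assumptions, not values read from this log:

# Hypothetical per-subsystem RPC batch; repeated for i = 1..10.
i=1
scripts/rpc.py bdev_malloc_create -b Malloc$i 64 512              # 64 MiB bdev, 512 B blocks (illustrative sizes)
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420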
00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:40.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:40.101 { 00:22:40.101 "params": { 00:22:40.101 "name": "Nvme$subsystem", 00:22:40.101 "trtype": "$TEST_TRANSPORT", 00:22:40.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.101 "adrfam": "ipv4", 00:22:40.101 "trsvcid": "$NVMF_PORT", 00:22:40.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.101 "hdgst": ${hdgst:-false}, 00:22:40.101 "ddgst": ${ddgst:-false} 00:22:40.101 }, 00:22:40.101 "method": "bdev_nvme_attach_controller" 00:22:40.101 } 00:22:40.101 EOF 00:22:40.101 )") 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:40.101 { 00:22:40.101 "params": { 00:22:40.101 "name": "Nvme$subsystem", 00:22:40.101 "trtype": "$TEST_TRANSPORT", 00:22:40.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.101 "adrfam": "ipv4", 00:22:40.101 "trsvcid": "$NVMF_PORT", 00:22:40.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.101 "hdgst": ${hdgst:-false}, 00:22:40.101 "ddgst": ${ddgst:-false} 00:22:40.101 }, 00:22:40.101 "method": "bdev_nvme_attach_controller" 00:22:40.101 } 00:22:40.101 EOF 00:22:40.101 )") 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:40.101 { 00:22:40.101 "params": { 00:22:40.101 "name": "Nvme$subsystem", 00:22:40.101 "trtype": "$TEST_TRANSPORT", 00:22:40.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.101 "adrfam": "ipv4", 00:22:40.101 "trsvcid": "$NVMF_PORT", 00:22:40.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.101 "hdgst": ${hdgst:-false}, 00:22:40.101 "ddgst": ${ddgst:-false} 00:22:40.101 }, 00:22:40.101 "method": "bdev_nvme_attach_controller" 00:22:40.101 } 00:22:40.101 EOF 00:22:40.101 )") 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 
-- # cat 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:40.101 { 00:22:40.101 "params": { 00:22:40.101 "name": "Nvme$subsystem", 00:22:40.101 "trtype": "$TEST_TRANSPORT", 00:22:40.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.101 "adrfam": "ipv4", 00:22:40.101 "trsvcid": "$NVMF_PORT", 00:22:40.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.101 "hdgst": ${hdgst:-false}, 00:22:40.101 "ddgst": ${ddgst:-false} 00:22:40.101 }, 00:22:40.101 "method": "bdev_nvme_attach_controller" 00:22:40.101 } 00:22:40.101 EOF 00:22:40.101 )") 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:40.101 { 00:22:40.101 "params": { 00:22:40.101 "name": "Nvme$subsystem", 00:22:40.101 "trtype": "$TEST_TRANSPORT", 00:22:40.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.101 "adrfam": "ipv4", 00:22:40.101 "trsvcid": "$NVMF_PORT", 00:22:40.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.101 "hdgst": ${hdgst:-false}, 00:22:40.101 "ddgst": ${ddgst:-false} 00:22:40.101 }, 00:22:40.101 "method": "bdev_nvme_attach_controller" 00:22:40.101 } 00:22:40.101 EOF 00:22:40.101 )") 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:40.101 [2024-05-15 01:25:15.594091] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
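The nvmf/common.sh@532-558 entries interleaved above are gen_nvmf_target_json building the --json config that bdevperf reads on /dev/fd/63: one bdev_nvme_attach_controller fragment per subsystem is appended to config[], the fragments are comma-joined, and the result is run through jq. A minimal sketch of that pattern follows; the real helper additionally wraps the fragments in bdevperf's full "subsystems" document, which this sketch does not.

# Minimal sketch of the fragment-per-subsystem pattern traced above; address,
# port and NQN templates are the ones visible in the log.
gen_attach_fragments() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local joined
    joined=$(IFS=,; printf '%s' "${config[*]}")   # comma-join the fragments
    jq . <<<"[ $joined ]"                         # validate and pretty-print
}

gen_attach_fragments 1 2 3    # emits one attach call per cnode1..cnode3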
00:22:40.101 [2024-05-15 01:25:15.594145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4173548 ] 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:40.101 { 00:22:40.101 "params": { 00:22:40.101 "name": "Nvme$subsystem", 00:22:40.101 "trtype": "$TEST_TRANSPORT", 00:22:40.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.101 "adrfam": "ipv4", 00:22:40.101 "trsvcid": "$NVMF_PORT", 00:22:40.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.101 "hdgst": ${hdgst:-false}, 00:22:40.101 "ddgst": ${ddgst:-false} 00:22:40.101 }, 00:22:40.101 "method": "bdev_nvme_attach_controller" 00:22:40.101 } 00:22:40.101 EOF 00:22:40.101 )") 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:40.101 { 00:22:40.101 "params": { 00:22:40.101 "name": "Nvme$subsystem", 00:22:40.101 "trtype": "$TEST_TRANSPORT", 00:22:40.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.101 "adrfam": "ipv4", 00:22:40.101 "trsvcid": "$NVMF_PORT", 00:22:40.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.101 "hdgst": ${hdgst:-false}, 00:22:40.101 "ddgst": ${ddgst:-false} 00:22:40.101 }, 00:22:40.101 "method": "bdev_nvme_attach_controller" 00:22:40.101 } 00:22:40.101 EOF 00:22:40.101 )") 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:40.101 { 00:22:40.101 "params": { 00:22:40.101 "name": "Nvme$subsystem", 00:22:40.101 "trtype": "$TEST_TRANSPORT", 00:22:40.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.101 "adrfam": "ipv4", 00:22:40.101 "trsvcid": "$NVMF_PORT", 00:22:40.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.101 "hdgst": ${hdgst:-false}, 00:22:40.101 "ddgst": ${ddgst:-false} 00:22:40.101 }, 00:22:40.101 "method": "bdev_nvme_attach_controller" 00:22:40.101 } 00:22:40.101 EOF 00:22:40.101 )") 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:40.101 { 00:22:40.101 "params": { 00:22:40.101 "name": "Nvme$subsystem", 00:22:40.101 "trtype": "$TEST_TRANSPORT", 00:22:40.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.101 "adrfam": "ipv4", 00:22:40.101 "trsvcid": "$NVMF_PORT", 00:22:40.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.101 "hdgst": ${hdgst:-false}, 00:22:40.101 "ddgst": ${ddgst:-false} 00:22:40.101 }, 00:22:40.101 "method": "bdev_nvme_attach_controller" 00:22:40.101 } 
00:22:40.101 EOF 00:22:40.101 )") 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:40.101 { 00:22:40.101 "params": { 00:22:40.101 "name": "Nvme$subsystem", 00:22:40.101 "trtype": "$TEST_TRANSPORT", 00:22:40.101 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.101 "adrfam": "ipv4", 00:22:40.101 "trsvcid": "$NVMF_PORT", 00:22:40.101 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.101 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.101 "hdgst": ${hdgst:-false}, 00:22:40.101 "ddgst": ${ddgst:-false} 00:22:40.101 }, 00:22:40.101 "method": "bdev_nvme_attach_controller" 00:22:40.101 } 00:22:40.101 EOF 00:22:40.101 )") 00:22:40.101 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:22:40.101 01:25:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:40.101 "params": { 00:22:40.101 "name": "Nvme1", 00:22:40.101 "trtype": "tcp", 00:22:40.101 "traddr": "10.0.0.2", 00:22:40.101 "adrfam": "ipv4", 00:22:40.101 "trsvcid": "4420", 00:22:40.101 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.101 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:40.102 "hdgst": false, 00:22:40.102 "ddgst": false 00:22:40.102 }, 00:22:40.102 "method": "bdev_nvme_attach_controller" 00:22:40.102 },{ 00:22:40.102 "params": { 00:22:40.102 "name": "Nvme2", 00:22:40.102 "trtype": "tcp", 00:22:40.102 "traddr": "10.0.0.2", 00:22:40.102 "adrfam": "ipv4", 00:22:40.102 "trsvcid": "4420", 00:22:40.102 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:40.102 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:40.102 "hdgst": false, 00:22:40.102 "ddgst": false 00:22:40.102 }, 00:22:40.102 "method": "bdev_nvme_attach_controller" 00:22:40.102 },{ 00:22:40.102 "params": { 00:22:40.102 "name": "Nvme3", 00:22:40.102 "trtype": "tcp", 00:22:40.102 "traddr": "10.0.0.2", 00:22:40.102 "adrfam": "ipv4", 00:22:40.102 "trsvcid": "4420", 00:22:40.102 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:40.102 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:40.102 "hdgst": false, 00:22:40.102 "ddgst": false 00:22:40.102 }, 00:22:40.102 "method": "bdev_nvme_attach_controller" 00:22:40.102 },{ 00:22:40.102 "params": { 00:22:40.102 "name": "Nvme4", 00:22:40.102 "trtype": "tcp", 00:22:40.102 "traddr": "10.0.0.2", 00:22:40.102 "adrfam": "ipv4", 00:22:40.102 "trsvcid": "4420", 00:22:40.102 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:40.102 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:40.102 "hdgst": false, 00:22:40.102 "ddgst": false 00:22:40.102 }, 00:22:40.102 "method": "bdev_nvme_attach_controller" 00:22:40.102 },{ 00:22:40.102 "params": { 00:22:40.102 "name": "Nvme5", 00:22:40.102 "trtype": "tcp", 00:22:40.102 "traddr": "10.0.0.2", 00:22:40.102 "adrfam": "ipv4", 00:22:40.102 "trsvcid": "4420", 00:22:40.102 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:40.102 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:40.102 "hdgst": false, 00:22:40.102 "ddgst": false 00:22:40.102 }, 00:22:40.102 "method": "bdev_nvme_attach_controller" 00:22:40.102 },{ 00:22:40.102 "params": { 
00:22:40.102 "name": "Nvme6", 00:22:40.102 "trtype": "tcp", 00:22:40.102 "traddr": "10.0.0.2", 00:22:40.102 "adrfam": "ipv4", 00:22:40.102 "trsvcid": "4420", 00:22:40.102 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:40.102 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:40.102 "hdgst": false, 00:22:40.102 "ddgst": false 00:22:40.102 }, 00:22:40.102 "method": "bdev_nvme_attach_controller" 00:22:40.102 },{ 00:22:40.102 "params": { 00:22:40.102 "name": "Nvme7", 00:22:40.102 "trtype": "tcp", 00:22:40.102 "traddr": "10.0.0.2", 00:22:40.102 "adrfam": "ipv4", 00:22:40.102 "trsvcid": "4420", 00:22:40.102 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:40.102 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:40.102 "hdgst": false, 00:22:40.102 "ddgst": false 00:22:40.102 }, 00:22:40.102 "method": "bdev_nvme_attach_controller" 00:22:40.102 },{ 00:22:40.102 "params": { 00:22:40.102 "name": "Nvme8", 00:22:40.102 "trtype": "tcp", 00:22:40.102 "traddr": "10.0.0.2", 00:22:40.102 "adrfam": "ipv4", 00:22:40.102 "trsvcid": "4420", 00:22:40.102 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:40.102 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:40.102 "hdgst": false, 00:22:40.102 "ddgst": false 00:22:40.102 }, 00:22:40.102 "method": "bdev_nvme_attach_controller" 00:22:40.102 },{ 00:22:40.102 "params": { 00:22:40.102 "name": "Nvme9", 00:22:40.102 "trtype": "tcp", 00:22:40.102 "traddr": "10.0.0.2", 00:22:40.102 "adrfam": "ipv4", 00:22:40.102 "trsvcid": "4420", 00:22:40.102 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:40.102 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:40.102 "hdgst": false, 00:22:40.102 "ddgst": false 00:22:40.102 }, 00:22:40.102 "method": "bdev_nvme_attach_controller" 00:22:40.102 },{ 00:22:40.102 "params": { 00:22:40.102 "name": "Nvme10", 00:22:40.102 "trtype": "tcp", 00:22:40.102 "traddr": "10.0.0.2", 00:22:40.102 "adrfam": "ipv4", 00:22:40.102 "trsvcid": "4420", 00:22:40.102 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:40.102 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:40.102 "hdgst": false, 00:22:40.102 "ddgst": false 00:22:40.102 }, 00:22:40.102 "method": "bdev_nvme_attach_controller" 00:22:40.102 }' 00:22:40.102 [2024-05-15 01:25:15.666855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.102 [2024-05-15 01:25:15.736198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.007 Running I/O for 10 seconds... 
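Both shutdown cases gate the actual shutdown on the waitforio helper traced at target/shutdown.sh@50-69: poll bdevperf's per-bdev read counter over its RPC socket until at least 100 reads have completed, i.e. until real I/O is flowing through the target, retrying up to ten times with a 0.25 s sleep. Condensed into a sketch (rpc_cmd is the autotest wrapper around scripts/rpc.py, as seen in the trace):

# Condensed sketch of waitforio as traced above.
waitforio() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0        # bdevperf is actively reading through the target
            break
        fi
        sleep 0.25
    done
    return "$ret"
}

# tc3 waits for I/O and then kills the nvmf target (pid 4173281 here) under load,
# which is what drives the ABORTED - SQ DELETION completions logged below.
waitforio /var/tmp/bdevperf.sock Nvme1n1 && killprocess "$nvmfpid"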
00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 4173281 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 4173281 ']' 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 4173281 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 
4173281 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4173281' 00:22:42.588 killing process with pid 4173281 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 4173281 00:22:42.588 [2024-05-15 01:25:18.221329] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:42.588 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 4173281 00:22:42.588 [2024-05-15 01:25:18.228174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.588 [2024-05-15 01:25:18.228221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.588 [2024-05-15 01:25:18.228243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.588 [2024-05-15 01:25:18.228254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.588 [2024-05-15 01:25:18.228266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.588 [2024-05-15 01:25:18.228276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.588 [2024-05-15 01:25:18.228289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.588 [2024-05-15 01:25:18.228299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.588 [2024-05-15 01:25:18.228310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.588 [2024-05-15 01:25:18.228322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.588 [2024-05-15 01:25:18.228333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.588 [2024-05-15 01:25:18.228344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.588 [2024-05-15 01:25:18.228335] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c7d00 is same with [2024-05-15 01:25:18.228355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:1the state(5) to be set 00:22:42.588 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.588 [2024-05-15 01:25:18.228367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:42.588 [2024-05-15 01:25:18.228377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.588 [2024-05-15 01:25:18.228388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.588 [2024-05-15 01:25:18.228400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.588 [2024-05-15 01:25:18.228411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.588 [2024-05-15 01:25:18.228422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.588 [2024-05-15 01:25:18.228432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.588 [2024-05-15 01:25:18.228444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.588 [2024-05-15 01:25:18.228454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.588 [2024-05-15 01:25:18.228466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.588 [2024-05-15 01:25:18.228475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.588 [2024-05-15 01:25:18.228486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.588 [2024-05-15 01:25:18.228496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.588 [2024-05-15 01:25:18.228507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.588 [2024-05-15 01:25:18.228516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.588 [2024-05-15 01:25:18.228528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.588 [2024-05-15 01:25:18.228537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.588 [2024-05-15 01:25:18.228547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.588 [2024-05-15 01:25:18.228556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.588 [2024-05-15 01:25:18.228567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.588 [2024-05-15 01:25:18.228576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.588 
[2024-05-15 01:25:18.228588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.588 [2024-05-15 01:25:18.228598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.588 [2024-05-15 01:25:18.228609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.588 [2024-05-15 01:25:18.228623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.588 [2024-05-15 01:25:18.228633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.588 [2024-05-15 01:25:18.228642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.588 [2024-05-15 01:25:18.228653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.228662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.228672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.228681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.228692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.228701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.228712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.228721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.228731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.228741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.228751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.228761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.228771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.228780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 
01:25:18.228791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.228800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.228811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.228820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.228830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.228839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.228850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.228859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.228871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.228880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.228891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.228899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.228910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.228919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.228930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.228939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.228950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.228959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.228970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.228979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 
01:25:18.228989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.228998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229187] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229394] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.589 [2024-05-15 01:25:18.229454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.589 [2024-05-15 01:25:18.229463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.590 [2024-05-15 01:25:18.229474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.590 [2024-05-15 01:25:18.229483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.590 [2024-05-15 01:25:18.229493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.590 [2024-05-15 01:25:18.229502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.590 [2024-05-15 01:25:18.229513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.590 [2024-05-15 01:25:18.229522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.590 [2024-05-15 01:25:18.229590] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x249a3b0 was disconnected and freed. reset controller. 
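The burst of ABORTED - SQ DELETION completions and the qpair 0x249a3b0 disconnect above are the expected fallout of the killprocess step traced earlier: the nvmf target (pid 4173281, reactor_1) is killed while bdevperf still has I/O in flight, so every outstanding command is aborted and bdev_nvme resets the controller. A rough shell sketch of that kill-and-wait step, reconstructed from the trace, follows; the sudo branch and error handling are assumptions not shown in the log.

    # Kill a test process by pid and reap it; sketched from the killprocess trace above.
    killprocess() {
        local pid=$1 name
        [ -n "$pid" ] || return 1                      # the '[' -z $pid ']' guard in the trace
        kill -0 "$pid" || return 1                     # process must still be alive
        if [ "$(uname)" = Linux ]; then
            name=$(ps --no-headers -o comm= "$pid")    # reactor_1 in this run
        fi
        # the real helper branches when $name is sudo; assumed out of scope here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # only works because the target was launched by this shell
    }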
00:22:42.590 [2024-05-15 01:25:18.230840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13ae760 is same with the state(5) to be set 00:22:42.590 (the previous tcp.c:1598 message is repeated verbatim for tqpair=0x13ae760 a further 62 times, timestamps 01:25:18.230856 through 01:25:18.231396) 00:22:42.590 [2024-05-15 01:25:18.231393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.590 [2024-05-15 01:25:18.231418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.590 [2024-05-15 01:25:18.231433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.590 [2024-05-15 01:25:18.231443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.590 [2024-05-15 01:25:18.231454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.590 [2024-05-15 01:25:18.231464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.590 [2024-05-15 01:25:18.231474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.590 [2024-05-15 01:25:18.231484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.590 [2024-05-15 01:25:18.231495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.590 [2024-05-15 01:25:18.231504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.590 [2024-05-15 01:25:18.231515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.590 [2024-05-15 01:25:18.231524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.231988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.231999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.232008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.232018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.232029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:42.591 [2024-05-15 01:25:18.232040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.232050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.232060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.232070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.232081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.232090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.232101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.232110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.232120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.232129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.232139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.232149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.232159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.232169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.232179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.232188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.232204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.232213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.232224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.232233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 
[2024-05-15 01:25:18.232243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.232253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.232263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.232273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.232283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.232292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.232302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.591 [2024-05-15 01:25:18.232311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.591 [2024-05-15 01:25:18.232322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-05-15 01:25:18.232331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.232342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-05-15 01:25:18.232352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.232363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-05-15 01:25:18.232372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.232382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-05-15 01:25:18.232393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.232403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-05-15 01:25:18.232412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.232424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-05-15 01:25:18.232433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 
01:25:18.232444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-05-15 01:25:18.232453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.232463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-05-15 01:25:18.232472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.232482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-05-15 01:25:18.232491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.232502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-05-15 01:25:18.232511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.232521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-05-15 01:25:18.232530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.232540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-05-15 01:25:18.232549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.232560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-05-15 01:25:18.232569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.232579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-05-15 01:25:18.232588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.232598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-05-15 01:25:18.232608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.232618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-05-15 01:25:18.232627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.232638] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-05-15 01:25:18.232647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.232657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-05-15 01:25:18.232669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.232679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.592 [2024-05-15 01:25:18.232689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.233087] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2493620 was disconnected and freed. reset controller. 00:22:42.592 [2024-05-15 01:25:18.233109] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:42.592 [2024-05-15 01:25:18.233156] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236b9f0 (9): Bad file descriptor 00:22:42.592 [2024-05-15 01:25:18.233198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.592 [2024-05-15 01:25:18.233210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.233219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.592 [2024-05-15 01:25:18.233229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.233238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.592 [2024-05-15 01:25:18.233247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.233257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.592 [2024-05-15 01:25:18.233266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.233274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2536240 is same with the state(5) to be set 00:22:42.592 [2024-05-15 01:25:18.233308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.592 [2024-05-15 01:25:18.233319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.233329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 
cdw11:00000000 00:22:42.592 [2024-05-15 01:25:18.233338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.233347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.592 [2024-05-15 01:25:18.233356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.233366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.592 [2024-05-15 01:25:18.233375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.592 [2024-05-15 01:25:18.233384] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a9530 is same with the state(5) to be set 00:22:42.592 [2024-05-15 01:25:18.234631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:42.592 [2024-05-15 01:25:18.234659] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a9530 (9): Bad file descriptor 00:22:42.592 [2024-05-15 01:25:18.234814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13aec00 is same with the state(5) to be set 00:22:42.593 (the previous tcp.c:1598 message is repeated verbatim for tqpair=0x13aec00 a further 55 times, timestamps 01:25:18.234845 through 01:25:18.235329) 00:22:42.593 [2024-05-15 01:25:18.235338] tcp.c:1598:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x13aec00 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.235346] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13aec00 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.235355] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13aec00 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.235364] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13aec00 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.235372] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13aec00 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.235380] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13aec00 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.235389] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13aec00 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.235733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.593 [2024-05-15 01:25:18.236033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.593 [2024-05-15 01:25:18.236045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x236b9f0 with addr=10.0.0.2, port=4420 00:22:42.593 [2024-05-15 01:25:18.236055] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236b9f0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236411] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236433] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236452] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236461] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236470] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236479] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236488] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236497] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236505] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236514] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236522] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236531] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236540] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236548] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236577] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236586] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236595] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236603] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236612] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236621] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.593 [2024-05-15 01:25:18.236630] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236639] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236647] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236656] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236664] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236673] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236681] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236690] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236699] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236708] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236716] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236727] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236744] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236753] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236761] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236770] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236778] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236787] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236797] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236831] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236848] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236857] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236865] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236874] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236882] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236891] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236899] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the 
state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236908] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236917] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236925] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236933] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236942] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236950] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.594 [2024-05-15 01:25:18.236959] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.236968] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af0a0 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.594 [2024-05-15 01:25:18.237323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a9530 with addr=10.0.0.2, port=4420 00:22:42.594 [2024-05-15 01:25:18.237333] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a9530 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237346] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236b9f0 (9): Bad file descriptor 00:22:42.594 [2024-05-15 01:25:18.237691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237715] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237729] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237738] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237756] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237764] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237781] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 
[2024-05-15 01:25:18.237790] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237798] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237793] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a9530 (9): Bad file descriptor 00:22:42.594 [2024-05-15 01:25:18.237810] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237817] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:42.594 [2024-05-15 01:25:18.237819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237829] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:42.594 [2024-05-15 01:25:18.237829] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237841] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237841] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:42.594 [2024-05-15 01:25:18.237851] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237860] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237869] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237877] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237885] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237893] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237894] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:42.594 [2024-05-15 01:25:18.237903] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237913] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237921] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237930] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237940] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237949] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237958] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237967] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237975] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237984] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.237992] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.238001] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.238012] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.238020] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.238029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.238038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.238046] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.594 [2024-05-15 01:25:18.238055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238071] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238080] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238097] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238106] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238114] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238125] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238134] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the 
state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238142] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238151] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238167] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238177] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238199] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238208] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238217] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238219] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.595 [2024-05-15 01:25:18.238225] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238235] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238236] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:42.595 [2024-05-15 01:25:18.238244] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238246] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:42.595 [2024-05-15 01:25:18.238253] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238258] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:42.595 [2024-05-15 01:25:18.238262] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238271] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af540 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.238596] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:42.595 [2024-05-15 01:25:18.238650] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:42.595 [2024-05-15 01:25:18.238968] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:42.595 [2024-05-15 01:25:18.239329] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239343] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239352] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239361] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239370] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239379] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239387] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239396] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239404] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239414] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239425] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239433] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239442] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239459] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239468] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239477] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239486] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239494] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239503] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239512] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239520] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239529] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239537] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239546] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239554] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239563] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239573] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239581] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239590] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239598] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239607] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239616] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239625] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239633] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239642] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239651] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239660] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239669] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239678] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239687] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239695] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the 
state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239704] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239712] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239721] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239729] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239738] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239746] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239755] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239763] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239772] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239780] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239789] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239797] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.595 [2024-05-15 01:25:18.239806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.239814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.239823] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.239831] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.239840] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.239849] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.239858] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.239867] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.239875] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13af9e0 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.239877] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: 
Unexpected PDU type 0x00 00:22:42.596 [2024-05-15 01:25:18.240877] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.240896] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.240907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.240915] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.240924] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.240933] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.240943] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.240952] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.240961] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.240970] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.240979] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.240987] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.240996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.241004] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.241012] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.241021] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.241029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.241038] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.241047] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.241055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.241063] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.241072] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.241080] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.241089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.241098] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.241106] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.241118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.242858] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:42.596 [2024-05-15 01:25:18.243185] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2536240 (9): Bad file descriptor 00:22:42.596 [2024-05-15 01:25:18.243242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.596 [2024-05-15 01:25:18.243254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.596 [2024-05-15 01:25:18.243264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.596 [2024-05-15 01:25:18.243274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.596 [2024-05-15 01:25:18.243284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.596 [2024-05-15 01:25:18.243293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.596 [2024-05-15 01:25:18.243302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.596 [2024-05-15 01:25:18.243311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.596 [2024-05-15 01:25:18.243320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397010 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.243347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.596 [2024-05-15 01:25:18.243357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.596 [2024-05-15 01:25:18.243367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.596 [2024-05-15 01:25:18.243376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.596 [2024-05-15 01:25:18.243386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.596 [2024-05-15 01:25:18.243394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.596 [2024-05-15 01:25:18.243404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.596 [2024-05-15 01:25:18.243413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.596 [2024-05-15 01:25:18.243422] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c6390 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.243450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.596 [2024-05-15 01:25:18.243460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.596 [2024-05-15 01:25:18.243470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.596 [2024-05-15 01:25:18.243479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.596 [2024-05-15 01:25:18.243489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.596 [2024-05-15 01:25:18.243498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.596 [2024-05-15 01:25:18.243507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.596 [2024-05-15 01:25:18.243519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.596 [2024-05-15 01:25:18.243528] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71610 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.243554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.596 [2024-05-15 01:25:18.243564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.596 [2024-05-15 01:25:18.243573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.596 [2024-05-15 01:25:18.243582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.596 [2024-05-15 01:25:18.243592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.596 [2024-05-15 01:25:18.243601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.596 [2024-05-15 01:25:18.243610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.596 [2024-05-15 01:25:18.243619] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.596 [2024-05-15 01:25:18.243628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238e4f0 is same with the state(5) to be set 00:22:42.596 [2024-05-15 01:25:18.243651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.596 [2024-05-15 01:25:18.243661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.596 [2024-05-15 01:25:18.243671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.596 [2024-05-15 01:25:18.243679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.596 [2024-05-15 01:25:18.243689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.596 [2024-05-15 01:25:18.243698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.596 [2024-05-15 01:25:18.243708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.597 [2024-05-15 01:25:18.243716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.597 [2024-05-15 01:25:18.243725] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238e310 is same with the state(5) to be set 00:22:42.597 [2024-05-15 01:25:18.244827] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:42.597 [2024-05-15 01:25:18.245118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.597 [2024-05-15 01:25:18.245478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.597 [2024-05-15 01:25:18.245491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x236b9f0 with addr=10.0.0.2, port=4420 00:22:42.597 [2024-05-15 01:25:18.245501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236b9f0 is same with the state(5) to be set 00:22:42.597 [2024-05-15 01:25:18.245631] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236b9f0 (9): Bad file descriptor 00:22:42.597 [2024-05-15 01:25:18.245738] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:42.597 [2024-05-15 01:25:18.245752] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:42.597 [2024-05-15 01:25:18.245762] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:42.597 [2024-05-15 01:25:18.245854] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:42.597 [2024-05-15 01:25:18.246278] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:42.597 [2024-05-15 01:25:18.246845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.597 [2024-05-15 01:25:18.247219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.597 [2024-05-15 01:25:18.247231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a9530 with addr=10.0.0.2, port=4420 00:22:42.597 [2024-05-15 01:25:18.247241] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a9530 is same with the state(5) to be set 00:22:42.597 [2024-05-15 01:25:18.247404] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a9530 (9): Bad file descriptor 00:22:42.597 [2024-05-15 01:25:18.247514] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:42.597 [2024-05-15 01:25:18.247526] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:42.597 [2024-05-15 01:25:18.247536] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:42.597 [2024-05-15 01:25:18.247719] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.597 [2024-05-15 01:25:18.253242] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397010 (9): Bad file descriptor 00:22:42.597 [2024-05-15 01:25:18.253271] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c6390 (9): Bad file descriptor 00:22:42.597 [2024-05-15 01:25:18.253289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e71610 (9): Bad file descriptor 00:22:42.597 [2024-05-15 01:25:18.253306] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238e4f0 (9): Bad file descriptor 00:22:42.597 [2024-05-15 01:25:18.253323] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238e310 (9): Bad file descriptor 00:22:42.597 [2024-05-15 01:25:18.253480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.597 [2024-05-15 01:25:18.253494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.597 [2024-05-15 01:25:18.253508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.597 [2024-05-15 01:25:18.253518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.597 [2024-05-15 01:25:18.253529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.597 [2024-05-15 01:25:18.253538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.597 [2024-05-15 01:25:18.253548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.597 [2024-05-15 01:25:18.253558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.597 [2024-05-15 01:25:18.253569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.597 [2024-05-15 01:25:18.253577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.597 [2024-05-15 01:25:18.253592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.597 [2024-05-15 01:25:18.253602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.597 [2024-05-15 01:25:18.253613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.597 [2024-05-15 01:25:18.253622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.597 [2024-05-15 01:25:18.253632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.597 [2024-05-15 01:25:18.253641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.597 [2024-05-15 01:25:18.253652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.597 [2024-05-15 01:25:18.253662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.597 [2024-05-15 01:25:18.253661] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.597 [2024-05-15 01:25:18.253672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.597 [2024-05-15 01:25:18.253676] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.597 [2024-05-15 01:25:18.253682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.597 [2024-05-15 01:25:18.253689] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.597 [2024-05-15 01:25:18.253693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.597 [2024-05-15 01:25:18.253701] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with [2024-05-15 01:25:18.253703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:22:42.597 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.597 [2024-05-15 01:25:18.253714] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with [2024-05-15 01:25:18.253716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:12the state(5) to be set 00:22:42.597 8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.597 [2024-05-15 01:25:18.253727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.597 [2024-05-15 01:25:18.253727] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.597 [2024-05-15 01:25:18.253738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.597 [2024-05-15 01:25:18.253740] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.597 [2024-05-15 01:25:18.253748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.597 [2024-05-15 01:25:18.253753] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.597 [2024-05-15 01:25:18.253760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.597 [2024-05-15 01:25:18.253765] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with [2024-05-15 01:25:18.253769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:22:42.597 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.597 [2024-05-15 01:25:18.253780] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with [2024-05-15 01:25:18.253782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:12the state(5) to be set 00:22:42.597 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.597 [2024-05-15 01:25:18.253794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.597 [2024-05-15 01:25:18.253794] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.597 [2024-05-15 01:25:18.253805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.597 [2024-05-15 01:25:18.253807] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.597 [2024-05-15 01:25:18.253815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.597 [2024-05-15 01:25:18.253819] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.597 [2024-05-15 01:25:18.253826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.253831] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.598 [2024-05-15 01:25:18.253835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.253843] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.598 [2024-05-15 01:25:18.253847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.253855] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with [2024-05-15 01:25:18.253856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:22:42.598 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.253868] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with [2024-05-15 01:25:18.253869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:12the state(5) to be set 00:22:42.598 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.253881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-15 01:25:18.253881] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 the state(5) to be set 00:22:42.598 [2024-05-15 01:25:18.253894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.253894] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.598 [2024-05-15 01:25:18.253904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.253907] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.598 [2024-05-15 01:25:18.253915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.253920] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.598 [2024-05-15 01:25:18.253927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.253932] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.598 [2024-05-15 01:25:18.253938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.253943] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.598 [2024-05-15 01:25:18.253948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.253955] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.598 [2024-05-15 01:25:18.253959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.253967] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with [2024-05-15 01:25:18.253969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:22:42.598 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.253980] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with [2024-05-15 01:25:18.253982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:12the state(5) to be set 00:22:42.598 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.253993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.253993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.598 [2024-05-15 01:25:18.254004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.254006] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.598 [2024-05-15 01:25:18.254014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.254018] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.598 [2024-05-15 01:25:18.254025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.254029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.598 [2024-05-15 01:25:18.254034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.254041] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.598 [2024-05-15 01:25:18.254045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.254053] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with [2024-05-15 01:25:18.254055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:22:42.598 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.254066] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with [2024-05-15 01:25:18.254068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:12the state(5) to be set 00:22:42.598 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.254081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-15 01:25:18.254080] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 the state(5) to be set 00:22:42.598 
[2024-05-15 01:25:18.254094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:12[2024-05-15 01:25:18.254094] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 the state(5) to be set 00:22:42.598 [2024-05-15 01:25:18.254105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.254107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13afe80 is same with the state(5) to be set 00:22:42.598 [2024-05-15 01:25:18.254116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.254125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.254136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.254145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.254155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.254164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.254174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.254183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.254204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.254213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.254223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.254232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.254243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.254252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.254262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.254271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.254281] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.254291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.254302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.254313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.254323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.254332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.254342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.254351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.254362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.254371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.254381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.254390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.254401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.254410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.254421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.598 [2024-05-15 01:25:18.254429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.598 [2024-05-15 01:25:18.254440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.599 [2024-05-15 01:25:18.254449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.599 [2024-05-15 01:25:18.254459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.599 [2024-05-15 01:25:18.254468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.599 [2024-05-15 01:25:18.254479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.599 [2024-05-15 01:25:18.254487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.599 [2024-05-15 01:25:18.254498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.599 [2024-05-15 01:25:18.254507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.599 [2024-05-15 01:25:18.254517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.599 [2024-05-15 01:25:18.254526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.599 [2024-05-15 01:25:18.254537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.599 [2024-05-15 01:25:18.254546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.599 [2024-05-15 01:25:18.254558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.599 [2024-05-15 01:25:18.254567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.599 [2024-05-15 01:25:18.254578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.599 [2024-05-15 01:25:18.254587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.599 [2024-05-15 01:25:18.254597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.599 [2024-05-15 01:25:18.254606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.599 [2024-05-15 01:25:18.254616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.599 [2024-05-15 01:25:18.254626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.599 [2024-05-15 01:25:18.254636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.599 [2024-05-15 01:25:18.254645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.599 [2024-05-15 01:25:18.254655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.599 [2024-05-15 01:25:18.254664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.599 [2024-05-15 01:25:18.254675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.599 [2024-05-15 01:25:18.254684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.599 [2024-05-15 01:25:18.254694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.599 [2024-05-15 01:25:18.254703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.599 [2024-05-15 01:25:18.254713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.599 [2024-05-15 01:25:18.254722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.599 [2024-05-15 01:25:18.254733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.599 [2024-05-15 01:25:18.254742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.599 [2024-05-15 01:25:18.254752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.599 [2024-05-15 01:25:18.254761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.599 [2024-05-15 01:25:18.254772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.599 [2024-05-15 01:25:18.254781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.599 [2024-05-15 01:25:18.254791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.599 [2024-05-15 01:25:18.254801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.599 [2024-05-15 01:25:18.254811] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2517f20 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255051] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255070] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255079] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255088] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255096] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255105] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the 
state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255113] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255122] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255130] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255139] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255147] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255155] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255164] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255172] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255181] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255189] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255202] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255210] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255219] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255227] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255236] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255244] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255253] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255261] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255270] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255281] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255290] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255298] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255306] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255315] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255323] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255331] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255340] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255349] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255357] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255366] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255374] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255383] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255391] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255399] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255408] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255416] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255425] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255433] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255441] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.599 [2024-05-15 01:25:18.255450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.255458] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.255466] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.255474] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.600 [2024-05-15 
01:25:18.255483] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.255492] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.255500] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.255510] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.255518] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.255527] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.255535] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.255543] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.255552] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.255560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.255568] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.255577] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.255585] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.255593] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b0320 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.255779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.255793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.255806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.255815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.255826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.255835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.255846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.255855] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.255867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.255876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.255886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.255896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.255906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.255915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.255926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.255937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.255948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.255957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.255968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.255977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.255988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.255997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.256008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.256017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.256027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.256037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.256047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.256056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.256067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.256076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.256086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.256095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.256106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.256115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.256125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.256135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.256145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.256154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.256165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.256160] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with [2024-05-15 01:25:18.256174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:22:42.600 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.256186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:12[2024-05-15 01:25:18.256186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.256201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-15 01:25:18.256202] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.256212] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.256213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.256221] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is 
same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.256224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.256230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.256235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.256239] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.256245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.256248] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.256256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:12[2024-05-15 01:25:18.256257] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.256267] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with [2024-05-15 01:25:18.256267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:22:42.600 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.256277] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.256280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.256286] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.256291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.600 [2024-05-15 01:25:18.256295] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.600 [2024-05-15 01:25:18.256302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.600 [2024-05-15 01:25:18.256304] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601 [2024-05-15 01:25:18.256311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-05-15 01:25:18.256313] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.601 the state(5) to be set 00:22:42.601 [2024-05-15 01:25:18.256324] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601 [2024-05-15 01:25:18.256326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.601 [2024-05-15 01:25:18.256333] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601 [2024-05-15 01:25:18.256336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.601 [2024-05-15 01:25:18.256341] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601 [2024-05-15 01:25:18.256347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.601 [2024-05-15 01:25:18.256350] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601 [2024-05-15 01:25:18.256357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.601 [2024-05-15 01:25:18.256359] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601 [2024-05-15 01:25:18.256368] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with [2024-05-15 01:25:18.256368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:12the state(5) to be set 00:22:42.601 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.601 [2024-05-15 01:25:18.256378] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with [2024-05-15 01:25:18.256379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:22:42.601 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.601 [2024-05-15 01:25:18.256388] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601 [2024-05-15 01:25:18.256391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.601 [2024-05-15 01:25:18.256397] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601 [2024-05-15 01:25:18.256401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.601 [2024-05-15 01:25:18.256406] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601 [2024-05-15 01:25:18.256413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.601 [2024-05-15 01:25:18.256415] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601 [2024-05-15 01:25:18.256422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.601 [2024-05-15 01:25:18.256424] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601 [2024-05-15 01:25:18.256433] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.601
[2024-05-15 01:25:18.256443] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.601
[2024-05-15 01:25:18.256453] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.601
[2024-05-15 01:25:18.256462] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.601
[2024-05-15 01:25:18.256471] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.601
[2024-05-15 01:25:18.256480] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.601
[2024-05-15 01:25:18.256493] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.601
[2024-05-15 01:25:18.256502] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.601
[2024-05-15 01:25:18.256513] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256524] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.601
[2024-05-15 01:25:18.256533] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.601
[2024-05-15 01:25:18.256542] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.601
[2024-05-15 01:25:18.256551] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.601
[2024-05-15 01:25:18.256560] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.601
[2024-05-15 01:25:18.256569] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.601
[2024-05-15 01:25:18.256579] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256591] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.601
[2024-05-15 01:25:18.256600] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.601
[2024-05-15 01:25:18.256609] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.601
[2024-05-15 01:25:18.256618] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.601
[2024-05-15 01:25:18.256627] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.601
[2024-05-15 01:25:18.256636] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.601
[2024-05-15 01:25:18.256646] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256657] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.601
[2024-05-15 01:25:18.256659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.601
[2024-05-15 01:25:18.256666] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.602
[2024-05-15 01:25:18.256668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602
[2024-05-15 01:25:18.256674] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.602
[2024-05-15 01:25:18.256680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602
[2024-05-15 01:25:18.256683] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.602
[2024-05-15 01:25:18.256690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602
[2024-05-15 01:25:18.256692] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.602
[2024-05-15 01:25:18.256701] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.602
[2024-05-15 01:25:18.256701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602
[2024-05-15 01:25:18.256711] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.602
[2024-05-15 01:25:18.256713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602
[2024-05-15 01:25:18.256720] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.602
[2024-05-15 01:25:18.256726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602
[2024-05-15 01:25:18.256730] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.602
[2024-05-15 01:25:18.256736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602
[2024-05-15 01:25:18.256738] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.602
[2024-05-15 01:25:18.256747] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.602
[2024-05-15 01:25:18.256747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602
[2024-05-15 01:25:18.256758]
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.602 [2024-05-15 01:25:18.256759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-05-15 01:25:18.256768] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b07c0 is same with the state(5) to be set 00:22:42.602 [2024-05-15 01:25:18.256771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-05-15 01:25:18.256781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-05-15 01:25:18.256792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-05-15 01:25:18.256801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-05-15 01:25:18.256811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-05-15 01:25:18.256820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-05-15 01:25:18.256831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-05-15 01:25:18.256840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-05-15 01:25:18.256850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-05-15 01:25:18.256860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-05-15 01:25:18.256870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-05-15 01:25:18.256880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-05-15 01:25:18.256890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-05-15 01:25:18.256899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-05-15 01:25:18.256910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-05-15 01:25:18.256922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-05-15 01:25:18.256933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-05-15 01:25:18.256942] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-05-15 01:25:18.256953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-05-15 01:25:18.256962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-05-15 01:25:18.256973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-05-15 01:25:18.256982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-05-15 01:25:18.256993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-05-15 01:25:18.257002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-05-15 01:25:18.257013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-05-15 01:25:18.257022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-05-15 01:25:18.257032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-05-15 01:25:18.257041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-05-15 01:25:18.257052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-05-15 01:25:18.257061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-05-15 01:25:18.257072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-05-15 01:25:18.257081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-05-15 01:25:18.257092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-05-15 01:25:18.257101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-05-15 01:25:18.257111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-05-15 01:25:18.257120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-05-15 01:25:18.257131] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2365ba0 is same with the state(5) to be set 00:22:42.602 [2024-05-15 01:25:18.257185] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2365ba0 was 
disconnected and freed. reset controller. 00:22:42.602 [2024-05-15 01:25:18.257300] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:42.602 [2024-05-15 01:25:18.258335] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:42.602 [2024-05-15 01:25:18.258723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.602 [2024-05-15 01:25:18.259036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.602 [2024-05-15 01:25:18.259047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2536240 with addr=10.0.0.2, port=4420 00:22:42.602 [2024-05-15 01:25:18.259058] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2536240 is same with the state(5) to be set 00:22:42.602 [2024-05-15 01:25:18.259394] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:42.602 [2024-05-15 01:25:18.259409] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:42.602 [2024-05-15 01:25:18.259846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.602 [2024-05-15 01:25:18.260200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.602 [2024-05-15 01:25:18.260213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c6390 with addr=10.0.0.2, port=4420 00:22:42.602 [2024-05-15 01:25:18.260222] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c6390 is same with the state(5) to be set 00:22:42.602 [2024-05-15 01:25:18.260234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2536240 (9): Bad file descriptor 00:22:42.602 [2024-05-15 01:25:18.261391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.602 [2024-05-15 01:25:18.261742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.602 [2024-05-15 01:25:18.261753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x236b9f0 with addr=10.0.0.2, port=4420 00:22:42.602 [2024-05-15 01:25:18.261763] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236b9f0 is same with the state(5) to be set 00:22:42.602 [2024-05-15 01:25:18.262116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.602 [2024-05-15 01:25:18.262545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.602 [2024-05-15 01:25:18.262556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a9530 with addr=10.0.0.2, port=4420 00:22:42.602 [2024-05-15 01:25:18.262565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a9530 is same with the state(5) to be set 00:22:42.602 [2024-05-15 01:25:18.262576] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c6390 (9): Bad file descriptor 00:22:42.602 [2024-05-15 01:25:18.262587] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:42.602 [2024-05-15 01:25:18.262596] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:42.602 [2024-05-15 01:25:18.262606] 
nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:42.602 [2024-05-15 01:25:18.262672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.602 [2024-05-15 01:25:18.262684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.602 [2024-05-15 01:25:18.262698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.262707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.262718] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23670a0 is same with the state(5) to be set 00:22:42.603 [2024-05-15 01:25:18.262774] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23670a0 was disconnected and freed. reset controller. 00:22:42.603 [2024-05-15 01:25:18.262819] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:42.603 [2024-05-15 01:25:18.262834] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.603 [2024-05-15 01:25:18.262852] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236b9f0 (9): Bad file descriptor 00:22:42.603 [2024-05-15 01:25:18.262864] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a9530 (9): Bad file descriptor 00:22:42.603 [2024-05-15 01:25:18.262874] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:42.603 [2024-05-15 01:25:18.262883] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:42.603 [2024-05-15 01:25:18.262891] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:42.603 [2024-05-15 01:25:18.263606] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.603 [2024-05-15 01:25:18.263619] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:42.603 [2024-05-15 01:25:18.263652] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2527290 (9): Bad file descriptor 00:22:42.603 [2024-05-15 01:25:18.263665] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:42.603 [2024-05-15 01:25:18.263674] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:42.603 [2024-05-15 01:25:18.263683] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:42.603 [2024-05-15 01:25:18.263694] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:42.603 [2024-05-15 01:25:18.263703] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:42.603 [2024-05-15 01:25:18.263711] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
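(Editor's note, not part of the captured output: the repeated "posix_sock_create: *ERROR*: connect() failed, errno = 111" lines above correspond to ECONNREFUSED on Linux, i.e. the listener at 10.0.0.2:4420 was refusing connections while the cnode controllers were being reset. A minimal triage sketch over a saved copy of this console output; "build.log" is an assumed filename, not something produced by this job:)
+ python3 -c 'import errno; print(errno.errorcode[111])'
ECONNREFUSED
+ grep -c 'connect() failed, errno = 111' build.log
+ grep -o 'nqn.2016-06.io.spdk:cnode[0-9]*' build.log
+ sort
+ uniq -c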
00:22:42.603 [2024-05-15 01:25:18.263735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.603 [2024-05-15 01:25:18.263746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.263756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.603 [2024-05-15 01:25:18.263765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.263775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.603 [2024-05-15 01:25:18.263784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.263793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:42.603 [2024-05-15 01:25:18.263802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.263811] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ae930 is same with the state(5) to be set 00:22:42.603 [2024-05-15 01:25:18.263881] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.603 [2024-05-15 01:25:18.263890] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:42.603 [2024-05-15 01:25:18.263933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.263944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.263957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.263969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.263980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.263989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264137] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.603 [2024-05-15 01:25:18.264498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.603 [2024-05-15 01:25:18.264507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264733] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.264987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.264997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.265007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.265016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.265026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.265035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.265046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.265055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.265065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.265074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.265085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.265094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.265104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.265113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.265124] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.265132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.265143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.265152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.265163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.265171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.265182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.265198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.265209] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25193d0 is same with the state(5) to be set 00:22:42.604 [2024-05-15 01:25:18.266169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.266182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.266197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.266207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.266218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.266227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.266237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.266246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.266257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.266266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.266277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.266286] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.266296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.266305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.266316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.604 [2024-05-15 01:25:18.266325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.604 [2024-05-15 01:25:18.266336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.266990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.266999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.267009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.267018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.267029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.605 [2024-05-15 01:25:18.267038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.605 [2024-05-15 01:25:18.267048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.606 [2024-05-15 01:25:18.267057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.606 [2024-05-15 01:25:18.267068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.606 [2024-05-15 01:25:18.267077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:42.606 [2024-05-15 01:25:18.267087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.606 [2024-05-15 01:25:18.267097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.606 [2024-05-15 01:25:18.267107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.606 [2024-05-15 01:25:18.267116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.606 [2024-05-15 01:25:18.267126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.606 [2024-05-15 01:25:18.267135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.606 [2024-05-15 01:25:18.267146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.606 [2024-05-15 01:25:18.267155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.606 [2024-05-15 01:25:18.267167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.606 [2024-05-15 01:25:18.267176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.606 [2024-05-15 01:25:18.267186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.606 [2024-05-15 01:25:18.267199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.606 [2024-05-15 01:25:18.267209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.606 [2024-05-15 01:25:18.267218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.606 [2024-05-15 01:25:18.267229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.606 [2024-05-15 01:25:18.267238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.606 [2024-05-15 01:25:18.267248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.606 [2024-05-15 01:25:18.267258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.606 [2024-05-15 01:25:18.267268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.606 [2024-05-15 01:25:18.267277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.606 [2024-05-15 
01:25:18.267287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.606 [2024-05-15 01:25:18.267296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.606 [2024-05-15 01:25:18.267307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.606 [2024-05-15 01:25:18.267316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.606 [2024-05-15 01:25:18.267326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.606 [2024-05-15 01:25:18.267335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.606 [2024-05-15 01:25:18.267345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.606 [2024-05-15 01:25:18.267355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.606 [2024-05-15 01:25:18.267366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.606 [2024-05-15 01:25:18.267375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.606 [2024-05-15 01:25:18.267385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.606 [2024-05-15 01:25:18.267395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.606 [2024-05-15 01:25:18.267406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.606 [2024-05-15 01:25:18.267416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.606 [2024-05-15 01:25:18.267428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.606 [2024-05-15 01:25:18.267437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.606 [2024-05-15 01:25:18.267446] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x251a910 is same with the state(5) to be set 00:22:42.870 [2024-05-15 01:25:18.268388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.870 [2024-05-15 01:25:18.268402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.870 [2024-05-15 01:25:18.268414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.870 [2024-05-15 01:25:18.268424] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.870 [2024-05-15 01:25:18.268435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.870 [2024-05-15 01:25:18.268444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.870 [2024-05-15 01:25:18.268455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.870 [2024-05-15 01:25:18.268465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.870 [2024-05-15 01:25:18.268476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.870 [2024-05-15 01:25:18.268485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.870 [2024-05-15 01:25:18.268496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.870 [2024-05-15 01:25:18.268505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268824] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.268985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.268995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.269004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.269015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.269024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.269035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.269044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.269054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.269064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.269074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.269083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.269094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.269106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.269117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.269126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.269137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.269146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.269157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.269166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.269176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.269186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.269199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.269209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.269220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.269229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.269240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.269249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.269260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.269268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.269279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.269289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.871 [2024-05-15 01:25:18.269299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.871 [2024-05-15 01:25:18.269308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.269319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.269328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.269339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.269348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.269360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.269369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.269380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.269389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.269399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.269408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.269419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.269428] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.269439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.269448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.269458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.269467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.269478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.269487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.269498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.269507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.269517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.269527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.269537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.269546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.269556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.269566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.269577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.269586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.269597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.269607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.269618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.269626] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.269637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.269646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.269657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.269666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.269675] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2363270 is same with the state(5) to be set 00:22:42.872 [2024-05-15 01:25:18.270624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.270638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.270651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.270660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.270671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.270680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.270691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.270700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.270711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.270720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.270731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.270740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.270750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.270759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.270770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.270779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.270790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.270801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.270812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.270821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.270831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.270841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.270851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.270861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.270872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.270881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.270892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.270901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.270912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.270921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.270931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.270941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.270951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.270960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.270971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.270980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.270991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.271000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.271010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.271020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.271030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.271039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.271050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.872 [2024-05-15 01:25:18.271060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.872 [2024-05-15 01:25:18.271071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:42.873 [2024-05-15 01:25:18.271381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 
01:25:18.271579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271776] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.873 [2024-05-15 01:25:18.271867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.873 [2024-05-15 01:25:18.271876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.874 [2024-05-15 01:25:18.271886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.874 [2024-05-15 01:25:18.271895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.874 [2024-05-15 01:25:18.271905] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23646a0 is same with the state(5) to be set 00:22:42.874 [2024-05-15 01:25:18.273061] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:42.874 [2024-05-15 01:25:18.273080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:42.874 [2024-05-15 01:25:18.273091] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:42.874 [2024-05-15 01:25:18.273102] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:42.874 [2024-05-15 01:25:18.273569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.874 [2024-05-15 01:25:18.273920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.874 [2024-05-15 01:25:18.273932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2527290 with addr=10.0.0.2, port=4420 00:22:42.874 [2024-05-15 01:25:18.273943] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2527290 is same with the state(5) to be set 
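The long run of paired READ / "ABORTED - SQ DELETION (00/08)" messages above is the initiator-side view of the qpairs being torn down: every read still queued on the submission queue is completed with NVMe status code type 0x0 (generic command status) and status code 0x08, "Command Aborted due to SQ Deletion", and dnr:0 means the Do Not Retry bit is clear, so the commands may be reissued once the controller reconnects. A minimal decoder for the (sct/sc) pair printed in these lines, written as a standalone sketch rather than SPDK code, could look like this:

    /* Sketch only, not SPDK code: interpret the "(00/08)" pair above as
     * NVMe Status Code Type / Status Code. In the generic command status
     * set (SCT 0x0), SC 0x08 is "Command Aborted due to SQ Deletion". */
    #include <stdio.h>

    static const char *decode_status(unsigned int sct, unsigned int sc)
    {
        if (sct == 0x0 && sc == 0x00)
            return "Successful Completion";
        if (sct == 0x0 && sc == 0x08)
            return "Command Aborted due to SQ Deletion";
        return "other status (see the NVMe base specification)";
    }

    int main(void)
    {
        printf("(00/08) -> %s\n", decode_status(0x0, 0x08));
        return 0;
    }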
00:22:42.874 [2024-05-15 01:25:18.274411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:42.874 [2024-05-15 01:25:18.274832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:42.874 [2024-05-15 01:25:18.274845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397010 with addr=10.0.0.2, port=4420
00:22:42.874 [2024-05-15 01:25:18.274855] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397010 is same with the state(5) to be set
00:22:42.874 [2024-05-15 01:25:18.275276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:42.874 [2024-05-15 01:25:18.275465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:42.874 [2024-05-15 01:25:18.275476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238e310 with addr=10.0.0.2, port=4420
00:22:42.874 [2024-05-15 01:25:18.275486] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238e310 is same with the state(5) to be set
00:22:42.874 [2024-05-15 01:25:18.275630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:42.874 [2024-05-15 01:25:18.276050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:42.874 [2024-05-15 01:25:18.276065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238e4f0 with addr=10.0.0.2, port=4420
00:22:42.874 [2024-05-15 01:25:18.276074] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238e4f0 is same with the state(5) to be set
00:22:42.874 [2024-05-15 01:25:18.276428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:42.874 [2024-05-15 01:25:18.276705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:42.874 [2024-05-15 01:25:18.276717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e71610 with addr=10.0.0.2, port=4420
00:22:42.874 [2024-05-15 01:25:18.276727] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71610 is same with the state(5) to be set
00:22:42.874 [2024-05-15 01:25:18.276739] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2527290 (9): Bad file descriptor
00:22:42.874 [2024-05-15 01:25:18.276752] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ae930 (9): Bad file descriptor
00:22:42.874 [2024-05-15 01:25:18.276775] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
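errno = 111 in the posix_sock_create errors above is ECONNREFUSED on Linux: the reset path keeps trying to reopen the TCP connections to 10.0.0.2 port 4420 while nothing is listening there any more, so every connect() is refused, and the "Failed to flush tqpair=... (9): Bad file descriptor" messages are EBADF because the socket behind the qpair is already gone. The refusal itself can be reproduced with plain POSIX sockets; the sketch below is not SPDK's posix.c, and it targets a loopback port assumed to have no listener so that the same errno value appears:

    /* Sketch only (plain POSIX sockets, not SPDK's posix.c): connecting to a
     * TCP port with no listener fails with ECONNREFUSED, which is errno 111
     * on Linux, the value reported by posix_sock_create above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),          /* NVMe/TCP port used in this log */
        };
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0)
            return 1;
        /* 127.0.0.1 is used here only so the refusal is local and immediate;
         * the log shows the same failure against 10.0.0.2. */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        return 0;
    }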
00:22:42.874 [2024-05-15 01:25:18.277667] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:22:42.874 [2024-05-15 01:25:18.277683] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:42.874 [2024-05-15 01:25:18.277693] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:22:42.874 [2024-05-15 01:25:18.277703] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:42.874 [2024-05-15 01:25:18.277736] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397010 (9): Bad file descriptor
00:22:42.874 [2024-05-15 01:25:18.277748] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238e310 (9): Bad file descriptor
00:22:42.874 [2024-05-15 01:25:18.277759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238e4f0 (9): Bad file descriptor
00:22:42.874 [2024-05-15 01:25:18.277770] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e71610 (9): Bad file descriptor
00:22:42.874 [2024-05-15 01:25:18.277780] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:22:42.874 [2024-05-15 01:25:18.277789] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:22:42.874 [2024-05-15 01:25:18.277799] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:22:42.874 [2024-05-15 01:25:18.277851] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:22:42.874 [2024-05-15 01:25:18.278298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.874 [2024-05-15 01:25:18.278653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.874 [2024-05-15 01:25:18.278666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2536240 with addr=10.0.0.2, port=4420 00:22:42.874 [2024-05-15 01:25:18.278675] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2536240 is same with the state(5) to be set 00:22:42.874 [2024-05-15 01:25:18.279101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.874 [2024-05-15 01:25:18.279387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.874 [2024-05-15 01:25:18.279400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c6390 with addr=10.0.0.2, port=4420 00:22:42.874 [2024-05-15 01:25:18.279409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c6390 is same with the state(5) to be set 00:22:42.874 [2024-05-15 01:25:18.279813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.874 [2024-05-15 01:25:18.280211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.874 [2024-05-15 01:25:18.280227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a9530 with addr=10.0.0.2, port=4420 00:22:42.874 [2024-05-15 01:25:18.280236] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a9530 is same with the state(5) to be set 00:22:42.874 [2024-05-15 01:25:18.280657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.874 [2024-05-15 01:25:18.280974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.874 [2024-05-15 01:25:18.280987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x236b9f0 with addr=10.0.0.2, port=4420 00:22:42.874 [2024-05-15 01:25:18.280996] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236b9f0 is same with the state(5) to be set 00:22:42.874 [2024-05-15 01:25:18.281005] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:42.874 [2024-05-15 01:25:18.281013] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:42.874 [2024-05-15 01:25:18.281023] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:42.874 [2024-05-15 01:25:18.281035] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:42.874 [2024-05-15 01:25:18.281044] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:42.874 [2024-05-15 01:25:18.281052] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:22:42.874 [2024-05-15 01:25:18.281063] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:42.874 [2024-05-15 01:25:18.281071] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:42.874 [2024-05-15 01:25:18.281079] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:42.874 [2024-05-15 01:25:18.281090] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:42.874 [2024-05-15 01:25:18.281098] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:42.874 [2024-05-15 01:25:18.281107] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:42.874 [2024-05-15 01:25:18.281150] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.874 [2024-05-15 01:25:18.281159] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.874 [2024-05-15 01:25:18.281167] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.874 [2024-05-15 01:25:18.281174] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.874 [2024-05-15 01:25:18.281184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2536240 (9): Bad file descriptor 00:22:42.874 [2024-05-15 01:25:18.281198] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c6390 (9): Bad file descriptor 00:22:42.874 [2024-05-15 01:25:18.281209] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a9530 (9): Bad file descriptor 00:22:42.874 [2024-05-15 01:25:18.281220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236b9f0 (9): Bad file descriptor 00:22:42.874 [2024-05-15 01:25:18.281240] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:42.874 [2024-05-15 01:25:18.281249] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:42.874 [2024-05-15 01:25:18.281258] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:42.874 [2024-05-15 01:25:18.281268] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:42.874 [2024-05-15 01:25:18.281279] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:42.874 [2024-05-15 01:25:18.281288] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:42.874 [2024-05-15 01:25:18.281298] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:42.874 [2024-05-15 01:25:18.281306] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:42.874 [2024-05-15 01:25:18.281314] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:22:42.874 [2024-05-15 01:25:18.281324] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:42.874 [2024-05-15 01:25:18.281333] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:42.874 [2024-05-15 01:25:18.281342] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:42.874 [2024-05-15 01:25:18.281361] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.874 [2024-05-15 01:25:18.281370] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.874 [2024-05-15 01:25:18.281377] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.874 [2024-05-15 01:25:18.281384] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.874 [2024-05-15 01:25:18.283213] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:42.874 [2024-05-15 01:25:18.283235] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:42.875 [2024-05-15 01:25:18.283281] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:42.875 [2024-05-15 01:25:18.283296] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:42.875 [2024-05-15 01:25:18.283768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.875 [2024-05-15 01:25:18.284202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.875 [2024-05-15 01:25:18.284219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e71610 with addr=10.0.0.2, port=4420 00:22:42.875 [2024-05-15 01:25:18.284231] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71610 is same with the state(5) to be set 00:22:42.875 [2024-05-15 01:25:18.284662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.875 [2024-05-15 01:25:18.284941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.875 [2024-05-15 01:25:18.284959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238e4f0 with addr=10.0.0.2, port=4420 00:22:42.875 [2024-05-15 01:25:18.284971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238e4f0 is same with the state(5) to be set 00:22:42.875 [2024-05-15 01:25:18.285321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.875 [2024-05-15 01:25:18.285604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.875 [2024-05-15 01:25:18.285621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238e310 with addr=10.0.0.2, port=4420 00:22:42.875 [2024-05-15 01:25:18.285633] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238e310 is same with the state(5) to be set 00:22:42.875 [2024-05-15 01:25:18.285979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.875 [2024-05-15 01:25:18.286333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.875 [2024-05-15 01:25:18.286350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: 
sock connection error of tqpair=0x2397010 with addr=10.0.0.2, port=4420 00:22:42.875 [2024-05-15 01:25:18.286362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397010 is same with the state(5) to be set 00:22:42.875 [2024-05-15 01:25:18.286380] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e71610 (9): Bad file descriptor 00:22:42.875 [2024-05-15 01:25:18.286395] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238e4f0 (9): Bad file descriptor 00:22:42.875 [2024-05-15 01:25:18.286445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238e310 (9): Bad file descriptor 00:22:42.875 [2024-05-15 01:25:18.286462] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397010 (9): Bad file descriptor 00:22:42.875 [2024-05-15 01:25:18.286475] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:42.875 [2024-05-15 01:25:18.286487] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:42.875 [2024-05-15 01:25:18.286499] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:42.875 [2024-05-15 01:25:18.286513] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:42.875 [2024-05-15 01:25:18.286525] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:42.875 [2024-05-15 01:25:18.286536] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:22:42.875 [2024-05-15 01:25:18.286606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.286622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.875 [2024-05-15 01:25:18.286640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.286653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.875 [2024-05-15 01:25:18.286668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.286680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.875 [2024-05-15 01:25:18.286694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.286706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.875 [2024-05-15 01:25:18.286721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.286733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.875 [2024-05-15 01:25:18.286747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.286759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.875 [2024-05-15 01:25:18.286774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.286786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.875 [2024-05-15 01:25:18.286800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.286812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.875 [2024-05-15 01:25:18.286830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.286842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.875 [2024-05-15 01:25:18.286856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.286868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.875 [2024-05-15 
01:25:18.286883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.286896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.875 [2024-05-15 01:25:18.286910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.286922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.875 [2024-05-15 01:25:18.286936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.286949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.875 [2024-05-15 01:25:18.286963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.286975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.875 [2024-05-15 01:25:18.286989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.287001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.875 [2024-05-15 01:25:18.287015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.287028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.875 [2024-05-15 01:25:18.287042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.287054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.875 [2024-05-15 01:25:18.287068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.287080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.875 [2024-05-15 01:25:18.287095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.287107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.875 [2024-05-15 01:25:18.287121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.287133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.875 [2024-05-15 01:25:18.287147] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.875 [2024-05-15 01:25:18.287162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287420] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287954] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.287980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.287993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.288007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.288020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.288035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.288047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.288061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.288074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.288089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.288101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.288115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.288127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.288142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.288154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.288168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.288182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.288202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.288214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.288229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.288241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.876 [2024-05-15 01:25:18.288255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.876 [2024-05-15 01:25:18.288268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.877 [2024-05-15 01:25:18.288282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.877 [2024-05-15 01:25:18.288294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.877 [2024-05-15 01:25:18.288309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:42.877 [2024-05-15 01:25:18.288321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:42.877 [2024-05-15 01:25:18.288334] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2492170 is same with the state(5) to be set 00:22:42.877 [2024-05-15 01:25:18.290297] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:42.877 [2024-05-15 01:25:18.290323] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.877 [2024-05-15 01:25:18.290334] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:42.877 task offset: 27392 on job bdev=Nvme1n1 fails
00:22:42.877
00:22:42.877 Latency(us)
00:22:42.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:42.877 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:42.877 Job: Nvme1n1 ended in about 0.63 seconds with error
00:22:42.877 Verification LBA range: start 0x0 length 0x400
00:22:42.877 Nvme1n1 : 0.63 304.77 19.05 101.59 0.00 155373.85 2844.26 173644.19
00:22:42.877 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:42.877 Job: Nvme2n1 ended in about 0.65 seconds with error
00:22:42.877 Verification LBA range: start 0x0 length 0x400
00:22:42.877 Nvme2n1 : 0.65 195.55 12.22 97.78 0.00 210409.88 18979.23 224814.69
00:22:42.877 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:42.877 Job: Nvme3n1 ended in about 0.66 seconds with error
00:22:42.877 Verification LBA range: start 0x0 length 0x400
00:22:42.877 Nvme3n1 : 0.66 96.25 6.02 96.25 0.00 313496.37 20237.52 261724.57
00:22:42.877 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:42.877 Job: Nvme4n1 ended in about 0.67 seconds with error
00:22:42.877 Verification LBA range: start 0x0 length 0x400
00:22:42.877 Nvme4n1 : 0.67 191.86 11.99 95.93 0.00 204685.31 19922.94 192937.98
00:22:42.877 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:42.877 Job: Nvme5n1 ended in about 0.67 seconds with error
00:22:42.877 Verification LBA range: start 0x0 length 0x400
00:22:42.877 Nvme5n1 : 0.67 191.22 11.95 95.61 0.00 200461.79 20027.80 205520.90
00:22:42.877 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:42.877 Job: Nvme6n1 ended in about 0.67 seconds with error
00:22:42.877 Verification LBA range: start 0x0 length 0x400
00:22:42.877 Nvme6n1 : 0.67 95.29 5.96 95.29 0.00 294497.89 21915.24 256691.40
00:22:42.877 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:42.877 Job: Nvme7n1 ended in about 0.66 seconds with error
00:22:42.877 Verification LBA range: start 0x0 length 0x400
00:22:42.877 Nvme7n1 : 0.66 194.79 12.17 97.40 0.00 186457.57 30198.99 172805.32
00:22:42.877 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:42.877 Job: Nvme8n1 ended in about 0.66 seconds with error
00:22:42.877 Verification LBA range: start 0x0 length 0x400
00:22:42.877 Nvme8n1 : 0.66 286.82 17.93 3.02 0.00 182480.08 19818.09 208876.34
00:22:42.877 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:42.877 Job: Nvme9n1 ended in about 0.69 seconds with error
00:22:42.877 Verification LBA range: start 0x0 length 0x400
00:22:42.877 Nvme9n1 : 0.69 186.02 11.63 93.01 0.00 186730.91 21705.52 191260.26
00:22:42.877 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:42.877 Job: Nvme10n1 ended in about 0.63 seconds with error
00:22:42.877 Verification LBA range: start 0x0 length 0x400
00:22:42.877 Nvme10n1 : 0.63 202.04 12.63 101.02 0.00 163591.51 3827.30 210554.06
00:22:42.877 ===================================================================================================================
00:22:42.877 Total : 1944.62 121.54 876.90 0.00 201446.04 2844.26 261724.57
00:22:42.877 [2024-05-15 01:25:18.315377] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:42.877 [2024-05-15 01:25:18.315415] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting
controller 00:22:42.877 [2024-05-15 01:25:18.315455] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:42.877 [2024-05-15 01:25:18.315465] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:42.877 [2024-05-15 01:25:18.315476] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:42.877 [2024-05-15 01:25:18.315490] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:42.877 [2024-05-15 01:25:18.315499] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:42.877 [2024-05-15 01:25:18.315509] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:42.877 [2024-05-15 01:25:18.315598] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.877 [2024-05-15 01:25:18.315608] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.877 [2024-05-15 01:25:18.316124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.877 [2024-05-15 01:25:18.316486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.877 [2024-05-15 01:25:18.316499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2527290 with addr=10.0.0.2, port=4420 00:22:42.877 [2024-05-15 01:25:18.316512] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2527290 is same with the state(5) to be set 00:22:42.877 [2024-05-15 01:25:18.316916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.877 [2024-05-15 01:25:18.317339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.877 [2024-05-15 01:25:18.317352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ae930 with addr=10.0.0.2, port=4420 00:22:42.877 [2024-05-15 01:25:18.317362] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ae930 is same with the state(5) to be set 00:22:42.877 [2024-05-15 01:25:18.317417] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:42.877 [2024-05-15 01:25:18.317435] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:42.877 [2024-05-15 01:25:18.317447] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:42.877 [2024-05-15 01:25:18.317459] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:22:42.877 [2024-05-15 01:25:18.317701] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:42.877 [2024-05-15 01:25:18.317715] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:42.877 [2024-05-15 01:25:18.317726] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:22:42.877 [2024-05-15 01:25:18.317736] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:42.877 [2024-05-15 01:25:18.317793] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2527290 (9): Bad file descriptor 00:22:42.877 [2024-05-15 01:25:18.317807] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ae930 (9): Bad file descriptor 00:22:42.877 [2024-05-15 01:25:18.318090] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:42.877 [2024-05-15 01:25:18.318110] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:22:42.877 [2024-05-15 01:25:18.318121] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:42.877 [2024-05-15 01:25:18.318582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.877 [2024-05-15 01:25:18.318982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.877 [2024-05-15 01:25:18.318995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x236b9f0 with addr=10.0.0.2, port=4420 00:22:42.877 [2024-05-15 01:25:18.319005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236b9f0 is same with the state(5) to be set 00:22:42.877 [2024-05-15 01:25:18.319426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.877 [2024-05-15 01:25:18.319770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.877 [2024-05-15 01:25:18.319782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a9530 with addr=10.0.0.2, port=4420 00:22:42.877 [2024-05-15 01:25:18.319791] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a9530 is same with the state(5) to be set 00:22:42.877 [2024-05-15 01:25:18.320137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.877 [2024-05-15 01:25:18.320509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.877 [2024-05-15 01:25:18.320522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c6390 with addr=10.0.0.2, port=4420 00:22:42.877 [2024-05-15 01:25:18.320531] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c6390 is same with the state(5) to be set 00:22:42.877 [2024-05-15 01:25:18.320820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.877 [2024-05-15 01:25:18.321184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.877 [2024-05-15 01:25:18.321200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2536240 with addr=10.0.0.2, port=4420 00:22:42.877 [2024-05-15 01:25:18.321209] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2536240 is same with 
the state(5) to be set 00:22:42.877 [2024-05-15 01:25:18.321219] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:42.877 [2024-05-15 01:25:18.321228] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:42.877 [2024-05-15 01:25:18.321237] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:42.877 [2024-05-15 01:25:18.321254] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:42.877 [2024-05-15 01:25:18.321263] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:42.877 [2024-05-15 01:25:18.321271] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:42.877 [2024-05-15 01:25:18.321302] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:42.877 [2024-05-15 01:25:18.321326] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.877 [2024-05-15 01:25:18.321334] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.877 [2024-05-15 01:25:18.321755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.877 [2024-05-15 01:25:18.322093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.877 [2024-05-15 01:25:18.322105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238e4f0 with addr=10.0.0.2, port=4420 00:22:42.878 [2024-05-15 01:25:18.322114] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238e4f0 is same with the state(5) to be set 00:22:42.878 [2024-05-15 01:25:18.322463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.878 [2024-05-15 01:25:18.322753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.878 [2024-05-15 01:25:18.322765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e71610 with addr=10.0.0.2, port=4420 00:22:42.878 [2024-05-15 01:25:18.322774] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e71610 is same with the state(5) to be set 00:22:42.878 [2024-05-15 01:25:18.323143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.878 [2024-05-15 01:25:18.323489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.878 [2024-05-15 01:25:18.323515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2397010 with addr=10.0.0.2, port=4420 00:22:42.878 [2024-05-15 01:25:18.323527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2397010 is same with the state(5) to be set 00:22:42.878 [2024-05-15 01:25:18.323543] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x236b9f0 (9): Bad file descriptor 00:22:42.878 [2024-05-15 01:25:18.323559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a9530 (9): Bad file descriptor 00:22:42.878 [2024-05-15 01:25:18.323573] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c6390 (9): Bad file descriptor 00:22:42.878 [2024-05-15 01:25:18.323588] 
nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2536240 (9): Bad file descriptor 00:22:42.878 [2024-05-15 01:25:18.324020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.878 [2024-05-15 01:25:18.324481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.878 [2024-05-15 01:25:18.324499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x238e310 with addr=10.0.0.2, port=4420 00:22:42.878 [2024-05-15 01:25:18.324512] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x238e310 is same with the state(5) to be set 00:22:42.878 [2024-05-15 01:25:18.324526] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238e4f0 (9): Bad file descriptor 00:22:42.878 [2024-05-15 01:25:18.324541] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e71610 (9): Bad file descriptor 00:22:42.878 [2024-05-15 01:25:18.324555] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2397010 (9): Bad file descriptor 00:22:42.878 [2024-05-15 01:25:18.324569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:42.878 [2024-05-15 01:25:18.324580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:42.878 [2024-05-15 01:25:18.324596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:42.878 [2024-05-15 01:25:18.324611] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:42.878 [2024-05-15 01:25:18.324623] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:42.878 [2024-05-15 01:25:18.324634] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:22:42.878 [2024-05-15 01:25:18.324648] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:42.878 [2024-05-15 01:25:18.324659] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:42.878 [2024-05-15 01:25:18.324670] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:42.878 [2024-05-15 01:25:18.324684] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:42.878 [2024-05-15 01:25:18.324695] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:42.878 [2024-05-15 01:25:18.324706] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:42.878 [2024-05-15 01:25:18.324742] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.878 [2024-05-15 01:25:18.324754] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.878 [2024-05-15 01:25:18.324764] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.878 [2024-05-15 01:25:18.324774] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:42.878 [2024-05-15 01:25:18.324786] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238e310 (9): Bad file descriptor 00:22:42.878 [2024-05-15 01:25:18.324800] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:42.878 [2024-05-15 01:25:18.324811] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:42.878 [2024-05-15 01:25:18.324822] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:42.878 [2024-05-15 01:25:18.324837] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:42.878 [2024-05-15 01:25:18.324848] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:42.878 [2024-05-15 01:25:18.324859] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:42.878 [2024-05-15 01:25:18.324872] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:42.878 [2024-05-15 01:25:18.324884] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:42.878 [2024-05-15 01:25:18.324895] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:42.878 [2024-05-15 01:25:18.324927] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.878 [2024-05-15 01:25:18.324938] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.878 [2024-05-15 01:25:18.324948] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.878 [2024-05-15 01:25:18.324958] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:42.878 [2024-05-15 01:25:18.324970] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:42.878 [2024-05-15 01:25:18.324981] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:42.878 [2024-05-15 01:25:18.325017] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:43.138 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:22:43.138 01:25:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:22:44.077 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 4173548 00:22:44.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (4173548) - No such process 00:22:44.078 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:22:44.078 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:22:44.078 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:44.078 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:44.078 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:44.078 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:44.078 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:44.078 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:22:44.078 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:44.078 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:22:44.078 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:44.078 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:44.078 rmmod nvme_tcp 00:22:44.078 rmmod nvme_fabrics 00:22:44.078 rmmod nvme_keyring 00:22:44.078 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:44.339 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:22:44.339 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:22:44.339 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:44.339 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:44.339 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:44.339 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:44.339 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:44.339 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:44.339 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.339 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:44.339 01:25:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.304 01:25:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:46.304 00:22:46.304 real 0m8.121s 00:22:46.304 user 0m20.179s 00:22:46.304 sys 0m1.572s 00:22:46.304 
01:25:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:46.304 01:25:21 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:46.304 ************************************ 00:22:46.304 END TEST nvmf_shutdown_tc3 00:22:46.304 ************************************ 00:22:46.304 01:25:21 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:22:46.304 00:22:46.304 real 0m33.388s 00:22:46.304 user 1m20.753s 00:22:46.304 sys 0m10.575s 00:22:46.304 01:25:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:46.304 01:25:21 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:46.304 ************************************ 00:22:46.304 END TEST nvmf_shutdown 00:22:46.304 ************************************ 00:22:46.304 01:25:21 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:22:46.304 01:25:21 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:46.304 01:25:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:46.304 01:25:21 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:22:46.304 01:25:21 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:46.304 01:25:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:46.563 01:25:21 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:22:46.563 01:25:21 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:46.563 01:25:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:46.563 01:25:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:46.563 01:25:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:46.563 ************************************ 00:22:46.563 START TEST nvmf_multicontroller 00:22:46.563 ************************************ 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:46.563 * Looking for test storage... 
00:22:46.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.563 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:46.564 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:46.564 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:46.564 01:25:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:46.564 01:25:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:46.564 01:25:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:46.564 01:25:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:46.564 01:25:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:46.564 01:25:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:46.564 01:25:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:46.564 01:25:22 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:46.564 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.564 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:46.564 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:46.564 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:46.564 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.564 01:25:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:46.564 01:25:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.564 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:46.564 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:46.564 01:25:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:22:46.564 01:25:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.133 01:25:28 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:53.133 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:53.133 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:53.133 Found net devices under 0000:af:00.0: cvl_0_0 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:53.133 Found net devices under 0000:af:00.1: cvl_0_1 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:53.133 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:53.392 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:53.392 01:25:28 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:53.392 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:53.392 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:53.392 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:53.392 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:53.392 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:53.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:53.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:22:53.392 00:22:53.392 --- 10.0.0.2 ping statistics --- 00:22:53.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.392 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:22:53.392 01:25:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:53.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:53.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:22:53.392 00:22:53.392 --- 10.0.0.1 ping statistics --- 00:22:53.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:53.392 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:22:53.392 01:25:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:53.392 01:25:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:22:53.392 01:25:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:53.392 01:25:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:53.392 01:25:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:53.392 01:25:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:53.392 01:25:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:53.392 01:25:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:53.392 01:25:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:53.392 01:25:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:53.393 01:25:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:53.393 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:53.393 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:53.393 01:25:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=4178083 00:22:53.393 01:25:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:53.393 01:25:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 4178083 00:22:53.393 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 4178083 ']' 00:22:53.393 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.393 01:25:29 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:22:53.393 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.393 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:53.393 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:53.651 [2024-05-15 01:25:29.094509] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:22:53.651 [2024-05-15 01:25:29.094554] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.651 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.651 [2024-05-15 01:25:29.167272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:53.651 [2024-05-15 01:25:29.237924] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.651 [2024-05-15 01:25:29.237964] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.652 [2024-05-15 01:25:29.237973] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.652 [2024-05-15 01:25:29.237981] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.652 [2024-05-15 01:25:29.237988] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:53.652 [2024-05-15 01:25:29.238096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.652 [2024-05-15 01:25:29.238183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:53.652 [2024-05-15 01:25:29.238184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.218 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:54.218 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:22:54.218 01:25:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:54.219 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:54.219 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.477 01:25:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.477 01:25:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:54.477 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.477 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.477 [2024-05-15 01:25:29.937575] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.477 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.477 01:25:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:54.477 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.477 01:25:29 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.477 Malloc0 00:22:54.478 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.478 01:25:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:54.478 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.478 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.478 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.478 01:25:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:54.478 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.478 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.478 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.478 01:25:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:54.478 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.478 01:25:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.478 [2024-05-15 01:25:29.998529] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:54.478 [2024-05-15 01:25:29.998798] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.478 [2024-05-15 01:25:30.006683] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.478 Malloc1 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.478 01:25:30 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=4178160 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 4178160 /var/tmp/bdevperf.sock 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 4178160 ']' 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:54.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:54.478 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.414 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:55.414 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:22:55.414 01:25:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:55.414 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.414 01:25:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.674 NVMe0n1 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.674 1 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.674 request: 00:22:55.674 { 00:22:55.674 "name": "NVMe0", 00:22:55.674 "trtype": "tcp", 00:22:55.674 "traddr": "10.0.0.2", 00:22:55.674 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:55.674 "hostaddr": "10.0.0.2", 00:22:55.674 "hostsvcid": "60000", 00:22:55.674 "adrfam": "ipv4", 00:22:55.674 "trsvcid": "4420", 00:22:55.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.674 "method": 
"bdev_nvme_attach_controller", 00:22:55.674 "req_id": 1 00:22:55.674 } 00:22:55.674 Got JSON-RPC error response 00:22:55.674 response: 00:22:55.674 { 00:22:55.674 "code": -114, 00:22:55.674 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:55.674 } 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.674 request: 00:22:55.674 { 00:22:55.674 "name": "NVMe0", 00:22:55.674 "trtype": "tcp", 00:22:55.674 "traddr": "10.0.0.2", 00:22:55.674 "hostaddr": "10.0.0.2", 00:22:55.674 "hostsvcid": "60000", 00:22:55.674 "adrfam": "ipv4", 00:22:55.674 "trsvcid": "4420", 00:22:55.674 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:55.674 "method": "bdev_nvme_attach_controller", 00:22:55.674 "req_id": 1 00:22:55.674 } 00:22:55.674 Got JSON-RPC error response 00:22:55.674 response: 00:22:55.674 { 00:22:55.674 "code": -114, 00:22:55.674 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:55.674 } 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:55.674 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.675 request: 00:22:55.675 { 00:22:55.675 "name": "NVMe0", 00:22:55.675 "trtype": "tcp", 00:22:55.675 "traddr": "10.0.0.2", 00:22:55.675 "hostaddr": "10.0.0.2", 00:22:55.675 "hostsvcid": "60000", 00:22:55.675 "adrfam": "ipv4", 00:22:55.675 "trsvcid": "4420", 00:22:55.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.675 "multipath": "disable", 00:22:55.675 "method": "bdev_nvme_attach_controller", 00:22:55.675 "req_id": 1 00:22:55.675 } 00:22:55.675 Got JSON-RPC error response 00:22:55.675 response: 00:22:55.675 { 00:22:55.675 "code": -114, 00:22:55.675 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:22:55.675 } 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.675 request: 00:22:55.675 { 00:22:55.675 "name": "NVMe0", 00:22:55.675 "trtype": "tcp", 00:22:55.675 "traddr": "10.0.0.2", 00:22:55.675 "hostaddr": "10.0.0.2", 00:22:55.675 "hostsvcid": "60000", 00:22:55.675 "adrfam": "ipv4", 00:22:55.675 "trsvcid": "4420", 00:22:55.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.675 "multipath": "failover", 00:22:55.675 "method": "bdev_nvme_attach_controller", 00:22:55.675 "req_id": 1 00:22:55.675 } 00:22:55.675 Got JSON-RPC error response 00:22:55.675 response: 00:22:55.675 { 00:22:55.675 "code": -114, 00:22:55.675 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:55.675 } 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.675 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.934 00:22:55.934 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.934 01:25:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:55.934 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.934 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.934 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.934 01:25:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:55.934 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.934 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.934 00:22:55.934 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.934 01:25:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:55.934 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.934 01:25:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:55.934 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:55.934 01:25:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.934 01:25:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:55.934 01:25:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:57.312 0 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 4178160 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 4178160 ']' 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 4178160 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4178160 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4178160' 00:22:57.312 killing process with pid 4178160 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 4178160 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 4178160 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:22:57.312 01:25:32 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:57.312 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:22:57.313 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:57.313 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:22:57.313 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:22:57.313 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:57.313 [2024-05-15 01:25:30.110594] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:22:57.313 [2024-05-15 01:25:30.110649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4178160 ] 00:22:57.313 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.313 [2024-05-15 01:25:30.182038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.313 [2024-05-15 01:25:30.254621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.313 [2024-05-15 01:25:31.496132] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 28466306-784d-4438-ad2f-487ce3c89f25 already exists 00:22:57.313 [2024-05-15 01:25:31.496164] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:28466306-784d-4438-ad2f-487ce3c89f25 alias for bdev NVMe1n1 00:22:57.313 [2024-05-15 01:25:31.496176] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:57.313 Running I/O for 1 seconds... 
00:22:57.313 00:22:57.313 Latency(us) 00:22:57.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.313 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:57.313 NVMe0n1 : 1.01 25337.58 98.97 0.00 0.00 5039.47 3080.19 22649.24 00:22:57.313 =================================================================================================================== 00:22:57.313 Total : 25337.58 98.97 0.00 0.00 5039.47 3080.19 22649.24 00:22:57.313 Received shutdown signal, test time was about 1.000000 seconds 00:22:57.313 00:22:57.313 Latency(us) 00:22:57.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.313 =================================================================================================================== 00:22:57.313 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:57.313 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:57.313 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:57.313 01:25:32 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:22:57.313 01:25:32 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:57.313 01:25:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:57.313 01:25:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:22:57.313 01:25:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:57.313 01:25:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:22:57.313 01:25:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:57.313 01:25:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:57.313 rmmod nvme_tcp 00:22:57.313 rmmod nvme_fabrics 00:22:57.313 rmmod nvme_keyring 00:22:57.313 01:25:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:57.313 01:25:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:22:57.313 01:25:32 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:22:57.313 01:25:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 4178083 ']' 00:22:57.313 01:25:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 4178083 00:22:57.313 01:25:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 4178083 ']' 00:22:57.313 01:25:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 4178083 00:22:57.572 01:25:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:22:57.572 01:25:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:57.572 01:25:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4178083 00:22:57.572 01:25:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:57.572 01:25:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:57.572 01:25:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4178083' 00:22:57.572 killing process with pid 4178083 00:22:57.572 01:25:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 4178083 00:22:57.572 [2024-05-15 
01:25:33.059548] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:57.572 01:25:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 4178083 00:22:57.831 01:25:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:57.831 01:25:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:57.831 01:25:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:57.831 01:25:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:57.831 01:25:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:57.831 01:25:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.831 01:25:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:57.831 01:25:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.737 01:25:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:59.737 00:22:59.737 real 0m13.333s 00:22:59.737 user 0m17.005s 00:22:59.737 sys 0m6.155s 00:22:59.737 01:25:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:59.737 01:25:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:59.737 ************************************ 00:22:59.737 END TEST nvmf_multicontroller 00:22:59.737 ************************************ 00:22:59.996 01:25:35 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:59.996 01:25:35 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:59.996 01:25:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:59.996 01:25:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:59.996 ************************************ 00:22:59.996 START TEST nvmf_aer 00:22:59.996 ************************************ 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:59.996 * Looking for test storage... 
00:22:59.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:22:59.996 01:25:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:06.562 01:25:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:06.562 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 
0x159b)' 00:23:06.562 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:06.562 Found net devices under 0000:af:00.0: cvl_0_0 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:06.562 Found net devices under 0000:af:00.1: cvl_0_1 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:06.562 
01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:06.562 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:06.821 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:06.821 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:06.821 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:06.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:06.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:23:06.821 00:23:06.821 --- 10.0.0.2 ping statistics --- 00:23:06.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.821 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:23:06.821 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:06.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:06.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:23:06.821 00:23:06.821 --- 10.0.0.1 ping statistics --- 00:23:06.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.821 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:23:06.821 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.821 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:06.821 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:06.821 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.821 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:06.821 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:06.821 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.821 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:06.821 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:06.821 01:25:42 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:06.822 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:06.822 01:25:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:06.822 01:25:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.822 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=4182350 00:23:06.822 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:06.822 01:25:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 4182350 00:23:06.822 01:25:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 4182350 ']' 00:23:06.822 01:25:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.822 01:25:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:06.822 01:25:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.822 01:25:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:06.822 01:25:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:06.822 [2024-05-15 01:25:42.407915] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:23:06.822 [2024-05-15 01:25:42.407961] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.822 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.822 [2024-05-15 01:25:42.481359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:07.080 [2024-05-15 01:25:42.553311] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.080 [2024-05-15 01:25:42.553351] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
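The namespace plumbing traced in nvmf_tcp_init above boils down to the short sequence below; a minimal sketch, consolidated from the commands visible in this trace and assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addresses that this particular run detected:

    # target-side port (cvl_0_0) goes into its own namespace; the initiator
    # port (cvl_0_1) stays in the default namespace, so host and target talk
    # over a real TCP path between the two E810 functions
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator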
00:23:07.080 [2024-05-15 01:25:42.553360] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.080 [2024-05-15 01:25:42.553369] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.080 [2024-05-15 01:25:42.553376] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:07.080 [2024-05-15 01:25:42.553424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.080 [2024-05-15 01:25:42.553522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.081 [2024-05-15 01:25:42.553583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:07.081 [2024-05-15 01:25:42.553585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.648 [2024-05-15 01:25:43.264015] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.648 Malloc0 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.648 [2024-05-15 01:25:43.318382] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:07.648 [2024-05-15 01:25:43.318642] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.648 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:07.648 [ 00:23:07.648 { 00:23:07.648 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:07.648 "subtype": "Discovery", 00:23:07.648 "listen_addresses": [], 00:23:07.648 "allow_any_host": true, 00:23:07.648 "hosts": [] 00:23:07.648 }, 00:23:07.648 { 00:23:07.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.648 "subtype": "NVMe", 00:23:07.648 "listen_addresses": [ 00:23:07.648 { 00:23:07.648 "trtype": "TCP", 00:23:07.648 "adrfam": "IPv4", 00:23:07.648 "traddr": "10.0.0.2", 00:23:07.648 "trsvcid": "4420" 00:23:07.648 } 00:23:07.648 ], 00:23:07.648 "allow_any_host": true, 00:23:07.648 "hosts": [], 00:23:07.648 "serial_number": "SPDK00000000000001", 00:23:07.648 "model_number": "SPDK bdev Controller", 00:23:07.648 "max_namespaces": 2, 00:23:07.648 "min_cntlid": 1, 00:23:07.648 "max_cntlid": 65519, 00:23:07.648 "namespaces": [ 00:23:07.648 { 00:23:07.648 "nsid": 1, 00:23:07.648 "bdev_name": "Malloc0", 00:23:07.648 "name": "Malloc0", 00:23:07.648 "nguid": "DC7D7A0C245146C59319A61248894C58", 00:23:07.648 "uuid": "dc7d7a0c-2451-46c5-9319-a61248894c58" 00:23:07.648 } 00:23:07.648 ] 00:23:07.648 } 00:23:07.648 ] 00:23:07.912 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.912 01:25:43 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:07.912 01:25:43 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:07.912 01:25:43 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=4182635 00:23:07.912 01:25:43 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:07.912 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:23:07.912 01:25:43 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:07.912 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:07.912 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:23:07.912 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:23:07.913 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:23:07.913 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.913 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:07.913 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:23:07.913 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:23:07.913 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:23:07.913 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:07.913 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:23:07.913 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=3 00:23:07.913 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.172 Malloc1 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.172 [ 00:23:08.172 { 00:23:08.172 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:08.172 "subtype": "Discovery", 00:23:08.172 "listen_addresses": [], 00:23:08.172 "allow_any_host": true, 00:23:08.172 "hosts": [] 00:23:08.172 }, 00:23:08.172 { 00:23:08.172 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:08.172 "subtype": "NVMe", 00:23:08.172 "listen_addresses": [ 00:23:08.172 { 00:23:08.172 "trtype": "TCP", 00:23:08.172 "adrfam": "IPv4", 00:23:08.172 "traddr": "10.0.0.2", 00:23:08.172 "trsvcid": "4420" 00:23:08.172 } 00:23:08.172 ], 00:23:08.172 "allow_any_host": true, 00:23:08.172 "hosts": [], 00:23:08.172 "serial_number": "SPDK00000000000001", 00:23:08.172 "model_number": "SPDK bdev Controller", 00:23:08.172 "max_namespaces": 2, 00:23:08.172 Asynchronous Event Request test 00:23:08.172 Attaching to 10.0.0.2 00:23:08.172 Attached to 10.0.0.2 00:23:08.172 Registering asynchronous event callbacks... 00:23:08.172 Starting namespace attribute notice tests for all controllers... 00:23:08.172 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:08.172 aer_cb - Changed Namespace 00:23:08.172 Cleaning up... 
00:23:08.172 "min_cntlid": 1, 00:23:08.172 "max_cntlid": 65519, 00:23:08.172 "namespaces": [ 00:23:08.172 { 00:23:08.172 "nsid": 1, 00:23:08.172 "bdev_name": "Malloc0", 00:23:08.172 "name": "Malloc0", 00:23:08.172 "nguid": "DC7D7A0C245146C59319A61248894C58", 00:23:08.172 "uuid": "dc7d7a0c-2451-46c5-9319-a61248894c58" 00:23:08.172 }, 00:23:08.172 { 00:23:08.172 "nsid": 2, 00:23:08.172 "bdev_name": "Malloc1", 00:23:08.172 "name": "Malloc1", 00:23:08.172 "nguid": "A45B2D93F53E441C90483AD96EF40B25", 00:23:08.172 "uuid": "a45b2d93-f53e-441c-9048-3ad96ef40b25" 00:23:08.172 } 00:23:08.172 ] 00:23:08.172 } 00:23:08.172 ] 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 4182635 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:08.172 rmmod nvme_tcp 00:23:08.172 rmmod nvme_fabrics 00:23:08.172 rmmod nvme_keyring 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 4182350 ']' 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 4182350 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 4182350 ']' 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 4182350 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:08.172 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 
4182350 00:23:08.431 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:08.431 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:08.431 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4182350' 00:23:08.431 killing process with pid 4182350 00:23:08.431 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 4182350 00:23:08.431 [2024-05-15 01:25:43.905560] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:08.431 01:25:43 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 4182350 00:23:08.431 01:25:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:08.431 01:25:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:08.431 01:25:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:08.431 01:25:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:08.431 01:25:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:08.431 01:25:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.431 01:25:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:08.431 01:25:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.971 01:25:46 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:10.971 00:23:10.971 real 0m10.710s 00:23:10.971 user 0m7.955s 00:23:10.971 sys 0m5.714s 00:23:10.971 01:25:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:10.971 01:25:46 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:10.971 ************************************ 00:23:10.971 END TEST nvmf_aer 00:23:10.971 ************************************ 00:23:10.971 01:25:46 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:10.971 01:25:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:10.971 01:25:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:10.971 01:25:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:10.971 ************************************ 00:23:10.971 START TEST nvmf_async_init 00:23:10.971 ************************************ 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:10.971 * Looking for test storage... 
00:23:10.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=2aa1e329afdf4cb8b5e9e94f3a2e1a8a 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:10.971 01:25:46 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:10.971 01:25:46 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:17.543 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:17.543 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:17.543 Found net devices under 0000:af:00.0: cvl_0_0 00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
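The device discovery above keys off PCI vendor:device IDs (0x8086:0x159b for the E810 ports in this run) and then reads the bound netdev name straight out of sysfs, which is where the cvl_0_0/cvl_0_1 names come from; a minimal sketch of that lookup, with the PCI address taken from this run:

    pci=0000:af:00.0                      # one of the E810 functions found above
    ls "/sys/bus/pci/devices/$pci/net/"   # prints the kernel netdev name, e.g. cvl_0_0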
00:23:17.543 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:17.544 Found net devices under 0000:af:00.1: cvl_0_1 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.544 01:25:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:17.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:17.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:23:17.544 00:23:17.544 --- 10.0.0.2 ping statistics --- 00:23:17.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.544 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:17.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:23:17.544 00:23:17.544 --- 10.0.0.1 ping statistics --- 00:23:17.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.544 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=4186337 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 4186337 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 4186337 ']' 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:17.544 01:25:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:17.804 [2024-05-15 01:25:53.251345] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
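nvmfappstart, as traced here, launches nvmf_tgt inside the target namespace and then waits for its RPC socket before any rpc_cmd calls are issued; a minimal sketch using the values from this run (core mask 0x1 for the async_init test, binary path relative to the spdk checkout, and the default /var/tmp/spdk.sock socket) — the real waitforlisten helper probes the RPC server, so the socket-file check below is only the simplest stand-in:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # block until the app is up and answering on its default RPC socket
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done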
00:23:17.804 [2024-05-15 01:25:53.251390] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.804 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.804 [2024-05-15 01:25:53.323332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.804 [2024-05-15 01:25:53.390457] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.804 [2024-05-15 01:25:53.390513] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.804 [2024-05-15 01:25:53.390523] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.804 [2024-05-15 01:25:53.390533] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.804 [2024-05-15 01:25:53.390540] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:17.804 [2024-05-15 01:25:53.390565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.371 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:18.371 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:23:18.371 01:25:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:18.371 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:18.371 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.630 [2024-05-15 01:25:54.093116] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.630 null0 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2aa1e329afdf4cb8b5e9e94f3a2e1a8a 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.630 [2024-05-15 01:25:54.137175] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:18.630 [2024-05-15 01:25:54.137398] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.630 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.890 nvme0n1 00:23:18.890 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.890 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:18.890 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.890 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.890 [ 00:23:18.890 { 00:23:18.890 "name": "nvme0n1", 00:23:18.890 "aliases": [ 00:23:18.890 "2aa1e329-afdf-4cb8-b5e9-e94f3a2e1a8a" 00:23:18.890 ], 00:23:18.890 "product_name": "NVMe disk", 00:23:18.890 "block_size": 512, 00:23:18.890 "num_blocks": 2097152, 00:23:18.890 "uuid": "2aa1e329-afdf-4cb8-b5e9-e94f3a2e1a8a", 00:23:18.890 "assigned_rate_limits": { 00:23:18.890 "rw_ios_per_sec": 0, 00:23:18.890 "rw_mbytes_per_sec": 0, 00:23:18.890 "r_mbytes_per_sec": 0, 00:23:18.890 "w_mbytes_per_sec": 0 00:23:18.890 }, 00:23:18.890 "claimed": false, 00:23:18.890 "zoned": false, 00:23:18.890 "supported_io_types": { 00:23:18.890 "read": true, 00:23:18.890 "write": true, 00:23:18.890 "unmap": false, 00:23:18.890 "write_zeroes": true, 00:23:18.890 "flush": true, 00:23:18.890 "reset": true, 00:23:18.890 "compare": true, 00:23:18.890 "compare_and_write": true, 00:23:18.890 "abort": true, 00:23:18.890 "nvme_admin": true, 00:23:18.890 "nvme_io": true 00:23:18.890 }, 00:23:18.890 "memory_domains": [ 00:23:18.890 { 00:23:18.890 "dma_device_id": "system", 00:23:18.890 "dma_device_type": 1 00:23:18.890 } 00:23:18.890 ], 00:23:18.890 "driver_specific": { 00:23:18.890 "nvme": [ 00:23:18.890 { 00:23:18.890 "trid": { 00:23:18.890 "trtype": "TCP", 00:23:18.890 "adrfam": "IPv4", 00:23:18.890 "traddr": "10.0.0.2", 00:23:18.890 "trsvcid": "4420", 00:23:18.890 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:18.890 }, 
00:23:18.890 "ctrlr_data": { 00:23:18.890 "cntlid": 1, 00:23:18.890 "vendor_id": "0x8086", 00:23:18.890 "model_number": "SPDK bdev Controller", 00:23:18.890 "serial_number": "00000000000000000000", 00:23:18.890 "firmware_revision": "24.05", 00:23:18.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:18.890 "oacs": { 00:23:18.890 "security": 0, 00:23:18.890 "format": 0, 00:23:18.890 "firmware": 0, 00:23:18.890 "ns_manage": 0 00:23:18.890 }, 00:23:18.890 "multi_ctrlr": true, 00:23:18.890 "ana_reporting": false 00:23:18.890 }, 00:23:18.890 "vs": { 00:23:18.890 "nvme_version": "1.3" 00:23:18.890 }, 00:23:18.890 "ns_data": { 00:23:18.890 "id": 1, 00:23:18.890 "can_share": true 00:23:18.890 } 00:23:18.890 } 00:23:18.890 ], 00:23:18.890 "mp_policy": "active_passive" 00:23:18.890 } 00:23:18.890 } 00:23:18.890 ] 00:23:18.890 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.890 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:18.890 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.890 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.890 [2024-05-15 01:25:54.413927] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:18.890 [2024-05-15 01:25:54.413986] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x954e70 (9): Bad file descriptor 00:23:18.890 [2024-05-15 01:25:54.556273] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:18.890 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.890 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:18.890 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.890 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:18.890 [ 00:23:18.890 { 00:23:18.890 "name": "nvme0n1", 00:23:18.890 "aliases": [ 00:23:18.890 "2aa1e329-afdf-4cb8-b5e9-e94f3a2e1a8a" 00:23:18.890 ], 00:23:18.890 "product_name": "NVMe disk", 00:23:18.890 "block_size": 512, 00:23:18.890 "num_blocks": 2097152, 00:23:18.890 "uuid": "2aa1e329-afdf-4cb8-b5e9-e94f3a2e1a8a", 00:23:18.890 "assigned_rate_limits": { 00:23:18.890 "rw_ios_per_sec": 0, 00:23:18.890 "rw_mbytes_per_sec": 0, 00:23:18.890 "r_mbytes_per_sec": 0, 00:23:18.890 "w_mbytes_per_sec": 0 00:23:18.890 }, 00:23:18.890 "claimed": false, 00:23:18.890 "zoned": false, 00:23:18.890 "supported_io_types": { 00:23:18.890 "read": true, 00:23:18.890 "write": true, 00:23:18.890 "unmap": false, 00:23:18.890 "write_zeroes": true, 00:23:18.890 "flush": true, 00:23:18.890 "reset": true, 00:23:18.890 "compare": true, 00:23:18.890 "compare_and_write": true, 00:23:18.890 "abort": true, 00:23:18.890 "nvme_admin": true, 00:23:18.890 "nvme_io": true 00:23:18.890 }, 00:23:18.890 "memory_domains": [ 00:23:18.890 { 00:23:18.890 "dma_device_id": "system", 00:23:18.890 "dma_device_type": 1 00:23:18.890 } 00:23:18.890 ], 00:23:18.890 "driver_specific": { 00:23:18.890 "nvme": [ 00:23:18.890 { 00:23:18.890 "trid": { 00:23:18.890 "trtype": "TCP", 00:23:18.890 "adrfam": "IPv4", 00:23:18.890 "traddr": "10.0.0.2", 00:23:18.890 "trsvcid": "4420", 00:23:18.890 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:18.890 }, 00:23:18.890 "ctrlr_data": { 00:23:18.890 "cntlid": 2, 00:23:18.890 
"vendor_id": "0x8086", 00:23:18.890 "model_number": "SPDK bdev Controller", 00:23:18.890 "serial_number": "00000000000000000000", 00:23:18.890 "firmware_revision": "24.05", 00:23:18.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:19.149 "oacs": { 00:23:19.149 "security": 0, 00:23:19.149 "format": 0, 00:23:19.149 "firmware": 0, 00:23:19.149 "ns_manage": 0 00:23:19.149 }, 00:23:19.149 "multi_ctrlr": true, 00:23:19.149 "ana_reporting": false 00:23:19.149 }, 00:23:19.149 "vs": { 00:23:19.149 "nvme_version": "1.3" 00:23:19.149 }, 00:23:19.149 "ns_data": { 00:23:19.149 "id": 1, 00:23:19.149 "can_share": true 00:23:19.149 } 00:23:19.149 } 00:23:19.149 ], 00:23:19.149 "mp_policy": "active_passive" 00:23:19.149 } 00:23:19.149 } 00:23:19.149 ] 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.KZ3KsPVIdH 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.KZ3KsPVIdH 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:19.149 [2024-05-15 01:25:54.630589] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:19.149 [2024-05-15 01:25:54.630721] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KZ3KsPVIdH 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:19.149 [2024-05-15 01:25:54.638606] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.149 01:25:54 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KZ3KsPVIdH 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:19.149 [2024-05-15 01:25:54.650636] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:19.149 [2024-05-15 01:25:54.650674] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:19.149 nvme0n1 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.149 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:19.149 [ 00:23:19.149 { 00:23:19.149 "name": "nvme0n1", 00:23:19.149 "aliases": [ 00:23:19.150 "2aa1e329-afdf-4cb8-b5e9-e94f3a2e1a8a" 00:23:19.150 ], 00:23:19.150 "product_name": "NVMe disk", 00:23:19.150 "block_size": 512, 00:23:19.150 "num_blocks": 2097152, 00:23:19.150 "uuid": "2aa1e329-afdf-4cb8-b5e9-e94f3a2e1a8a", 00:23:19.150 "assigned_rate_limits": { 00:23:19.150 "rw_ios_per_sec": 0, 00:23:19.150 "rw_mbytes_per_sec": 0, 00:23:19.150 "r_mbytes_per_sec": 0, 00:23:19.150 "w_mbytes_per_sec": 0 00:23:19.150 }, 00:23:19.150 "claimed": false, 00:23:19.150 "zoned": false, 00:23:19.150 "supported_io_types": { 00:23:19.150 "read": true, 00:23:19.150 "write": true, 00:23:19.150 "unmap": false, 00:23:19.150 "write_zeroes": true, 00:23:19.150 "flush": true, 00:23:19.150 "reset": true, 00:23:19.150 "compare": true, 00:23:19.150 "compare_and_write": true, 00:23:19.150 "abort": true, 00:23:19.150 "nvme_admin": true, 00:23:19.150 "nvme_io": true 00:23:19.150 }, 00:23:19.150 "memory_domains": [ 00:23:19.150 { 00:23:19.150 "dma_device_id": "system", 00:23:19.150 "dma_device_type": 1 00:23:19.150 } 00:23:19.150 ], 00:23:19.150 "driver_specific": { 00:23:19.150 "nvme": [ 00:23:19.150 { 00:23:19.150 "trid": { 00:23:19.150 "trtype": "TCP", 00:23:19.150 "adrfam": "IPv4", 00:23:19.150 "traddr": "10.0.0.2", 00:23:19.150 "trsvcid": "4421", 00:23:19.150 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:19.150 }, 00:23:19.150 "ctrlr_data": { 00:23:19.150 "cntlid": 3, 00:23:19.150 "vendor_id": "0x8086", 00:23:19.150 "model_number": "SPDK bdev Controller", 00:23:19.150 "serial_number": "00000000000000000000", 00:23:19.150 "firmware_revision": "24.05", 00:23:19.150 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:19.150 "oacs": { 00:23:19.150 "security": 0, 00:23:19.150 "format": 0, 00:23:19.150 "firmware": 0, 00:23:19.150 "ns_manage": 0 00:23:19.150 }, 00:23:19.150 "multi_ctrlr": true, 00:23:19.150 "ana_reporting": false 00:23:19.150 }, 00:23:19.150 "vs": { 00:23:19.150 "nvme_version": "1.3" 00:23:19.150 }, 00:23:19.150 "ns_data": { 00:23:19.150 "id": 1, 00:23:19.150 "can_share": true 00:23:19.150 } 00:23:19.150 } 00:23:19.150 ], 00:23:19.150 "mp_policy": "active_passive" 00:23:19.150 } 00:23:19.150 } 00:23:19.150 ] 00:23:19.150 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.150 01:25:54 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.150 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.150 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:19.150 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.150 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.KZ3KsPVIdH 00:23:19.150 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:19.150 01:25:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:19.150 01:25:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:19.150 01:25:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:19.150 01:25:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:19.150 01:25:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:19.150 01:25:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:19.150 01:25:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:19.150 rmmod nvme_tcp 00:23:19.150 rmmod nvme_fabrics 00:23:19.150 rmmod nvme_keyring 00:23:19.150 01:25:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:19.409 01:25:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:19.409 01:25:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:19.409 01:25:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 4186337 ']' 00:23:19.409 01:25:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 4186337 00:23:19.409 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 4186337 ']' 00:23:19.409 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 4186337 00:23:19.409 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:23:19.409 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:19.409 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4186337 00:23:19.409 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:19.409 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:19.409 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4186337' 00:23:19.409 killing process with pid 4186337 00:23:19.409 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 4186337 00:23:19.409 [2024-05-15 01:25:54.901670] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:19.409 [2024-05-15 01:25:54.901694] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:19.409 [2024-05-15 01:25:54.901705] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:19.409 01:25:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 4186337 00:23:19.409 01:25:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:19.409 01:25:55 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:19.409 01:25:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:19.409 01:25:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:19.409 01:25:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:19.409 01:25:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.409 01:25:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.409 01:25:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.939 01:25:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:21.939 00:23:21.939 real 0m10.895s 00:23:21.939 user 0m3.846s 00:23:21.939 sys 0m5.679s 00:23:21.939 01:25:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:21.939 01:25:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:21.939 ************************************ 00:23:21.939 END TEST nvmf_async_init 00:23:21.939 ************************************ 00:23:21.939 01:25:57 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:21.939 01:25:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:21.939 01:25:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:21.939 01:25:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:21.939 ************************************ 00:23:21.939 START TEST dma 00:23:21.939 ************************************ 00:23:21.939 01:25:57 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:21.939 * Looking for test storage... 
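Before the teardown traced above, the nvmf_async_init test exercised the experimental TLS secure-channel path on a second listener (port 4421). Condensed, the RPC sequence it drove looks like the following sketch; the key value is the sample interop PSK echoed in the trace, and rpc.py is assumed to target the default RPC socket:

    # Interop-format TLS PSK written to a private, mode-0600 file.
    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"

    rpc=./spdk/scripts/rpc.py
    # Restrict the subsystem to explicitly added hosts, add a TLS listener on 4421,
    # authorize host1 with the PSK, then attach from the host side using the same key.
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
            -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
    rm -f "$key_path"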
00:23:21.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:21.939 01:25:57 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.939 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:23:21.939 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.939 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.939 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.939 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.939 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.939 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.939 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.939 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.939 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.939 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.939 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:21.939 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:21.939 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.939 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.939 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.939 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.939 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.939 01:25:57 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.939 01:25:57 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.939 01:25:57 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.939 01:25:57 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.939 01:25:57 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.939 01:25:57 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.939 01:25:57 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:23:21.939 01:25:57 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.939 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:23:21.939 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:21.940 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:21.940 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.940 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.940 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.940 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:21.940 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:21.940 01:25:57 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:21.940 01:25:57 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:21.940 01:25:57 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:23:21.940 00:23:21.940 real 0m0.146s 00:23:21.940 user 0m0.066s 00:23:21.940 sys 0m0.091s 00:23:21.940 01:25:57 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:21.940 01:25:57 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:23:21.940 ************************************ 00:23:21.940 END TEST dma 00:23:21.940 ************************************ 00:23:21.940 01:25:57 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:21.940 01:25:57 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:21.940 01:25:57 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:21.940 01:25:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:21.940 ************************************ 00:23:21.940 START TEST nvmf_identify 00:23:21.940 ************************************ 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:21.940 * Looking for test storage... 
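The dma test above is effectively a no-op for this configuration: host/dma.sh only applies to RDMA transports, so for tcp it exits 0 immediately, which is why the whole TEST finishes in well under a second. The guard reduces to something like the sketch below (the variable name is illustrative; the trace shows the script comparing the expanded transport string):

    # DMA offload is only exercised over RDMA; any other transport passes trivially.
    if [ "$TEST_TRANSPORT" != "rdma" ]; then
            exit 0
    fi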
00:23:21.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:21.940 01:25:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.198 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:22.198 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:22.198 01:25:57 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:22.198 01:25:57 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:28.758 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:28.758 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:28.758 Found net devices under 0000:af:00.0: cvl_0_0 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:28.758 Found net devices under 0000:af:00.1: cvl_0_1 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.758 01:26:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:28.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:23:28.759 00:23:28.759 --- 10.0.0.2 ping statistics --- 00:23:28.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.759 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:28.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:23:28.759 00:23:28.759 --- 10.0.0.1 ping statistics --- 00:23:28.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.759 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=4190349 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 4190349 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 4190349 ']' 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:28.759 01:26:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:28.759 [2024-05-15 01:26:04.269691] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:23:28.759 [2024-05-15 01:26:04.269736] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.759 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.759 [2024-05-15 01:26:04.343385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:28.759 [2024-05-15 01:26:04.422045] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
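For the identify test, nvmf_tcp_init (traced above) again splits the two E810 ports between the root namespace and cvl_0_0_ns_spdk, assigns 10.0.0.1 to the initiator side and 10.0.0.2 to the target side, opens TCP port 4420, and checks reachability both ways before a four-core target is started. A sketch of that bring-up, using the interface names from this run:

    # Target-side port moves into its own namespace; initiator-side port stays put.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Accept NVMe/TCP traffic on the initiator-facing interface, then sanity-check both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1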
00:23:28.759 [2024-05-15 01:26:04.422081] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.759 [2024-05-15 01:26:04.422091] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.759 [2024-05-15 01:26:04.422099] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.759 [2024-05-15 01:26:04.422125] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:28.759 [2024-05-15 01:26:04.422175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.759 [2024-05-15 01:26:04.422271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.759 [2024-05-15 01:26:04.422290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:28.759 [2024-05-15 01:26:04.422292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.698 [2024-05-15 01:26:05.085856] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.698 Malloc0 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.698 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.699 01:26:05 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:29.699 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:23:29.699 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.699 [2024-05-15 01:26:05.184548] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:29.699 [2024-05-15 01:26:05.184800] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.699 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.699 01:26:05 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:29.699 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.699 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.699 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.699 01:26:05 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:29.699 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.699 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.699 [ 00:23:29.699 { 00:23:29.699 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:29.699 "subtype": "Discovery", 00:23:29.699 "listen_addresses": [ 00:23:29.699 { 00:23:29.699 "trtype": "TCP", 00:23:29.699 "adrfam": "IPv4", 00:23:29.699 "traddr": "10.0.0.2", 00:23:29.699 "trsvcid": "4420" 00:23:29.699 } 00:23:29.699 ], 00:23:29.699 "allow_any_host": true, 00:23:29.699 "hosts": [] 00:23:29.699 }, 00:23:29.699 { 00:23:29.699 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.699 "subtype": "NVMe", 00:23:29.699 "listen_addresses": [ 00:23:29.699 { 00:23:29.699 "trtype": "TCP", 00:23:29.699 "adrfam": "IPv4", 00:23:29.699 "traddr": "10.0.0.2", 00:23:29.699 "trsvcid": "4420" 00:23:29.699 } 00:23:29.699 ], 00:23:29.699 "allow_any_host": true, 00:23:29.699 "hosts": [], 00:23:29.699 "serial_number": "SPDK00000000000001", 00:23:29.699 "model_number": "SPDK bdev Controller", 00:23:29.699 "max_namespaces": 32, 00:23:29.699 "min_cntlid": 1, 00:23:29.699 "max_cntlid": 65519, 00:23:29.699 "namespaces": [ 00:23:29.699 { 00:23:29.699 "nsid": 1, 00:23:29.699 "bdev_name": "Malloc0", 00:23:29.699 "name": "Malloc0", 00:23:29.699 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:29.699 "eui64": "ABCDEF0123456789", 00:23:29.699 "uuid": "d5b30bdb-0c56-4423-94a2-376c21015c80" 00:23:29.699 } 00:23:29.699 ] 00:23:29.699 } 00:23:29.699 ] 00:23:29.699 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.699 01:26:05 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:29.699 [2024-05-15 01:26:05.243123] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
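The identify target above is provisioned with a TCP transport, a 64 MiB malloc bdev exposed as namespace 1 with fixed NGUID and EUI-64 values, and listeners for both the data subsystem and discovery, after which build/bin/spdk_nvme_identify connects as a host and dumps what it can see. Condensed from the RPCs and command line in the trace (rpc.py is assumed to target the default RPC socket):

    rpc=./spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
            --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Identify the discovery subsystem over TCP, printing all supported log pages (-L all).
    ./spdk/build/bin/spdk_nvme_identify \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
            -L all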
00:23:29.699 [2024-05-15 01:26:05.243165] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4190552 ] 00:23:29.699 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.699 [2024-05-15 01:26:05.274566] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:29.699 [2024-05-15 01:26:05.274608] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:29.699 [2024-05-15 01:26:05.274614] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:29.699 [2024-05-15 01:26:05.274626] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:29.699 [2024-05-15 01:26:05.274636] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:29.699 [2024-05-15 01:26:05.275134] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:29.699 [2024-05-15 01:26:05.275161] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x247aca0 0 00:23:29.699 [2024-05-15 01:26:05.289199] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:29.699 [2024-05-15 01:26:05.289223] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:29.699 [2024-05-15 01:26:05.289229] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:29.699 [2024-05-15 01:26:05.289234] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:29.699 [2024-05-15 01:26:05.289276] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.699 [2024-05-15 01:26:05.289282] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.699 [2024-05-15 01:26:05.289287] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x247aca0) 00:23:29.699 [2024-05-15 01:26:05.289302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:29.699 [2024-05-15 01:26:05.289320] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4980, cid 0, qid 0 00:23:29.699 [2024-05-15 01:26:05.297203] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.699 [2024-05-15 01:26:05.297214] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.699 [2024-05-15 01:26:05.297219] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.699 [2024-05-15 01:26:05.297225] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4980) on tqpair=0x247aca0 00:23:29.699 [2024-05-15 01:26:05.297236] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:29.699 [2024-05-15 01:26:05.297244] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:29.699 [2024-05-15 01:26:05.297250] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:23:29.699 [2024-05-15 01:26:05.297266] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.699 [2024-05-15 01:26:05.297271] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:23:29.699 [2024-05-15 01:26:05.297276] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x247aca0) 00:23:29.699 [2024-05-15 01:26:05.297284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.699 [2024-05-15 01:26:05.297299] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4980, cid 0, qid 0 00:23:29.699 [2024-05-15 01:26:05.297537] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.699 [2024-05-15 01:26:05.297546] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.699 [2024-05-15 01:26:05.297551] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.699 [2024-05-15 01:26:05.297556] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4980) on tqpair=0x247aca0 00:23:29.699 [2024-05-15 01:26:05.297563] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:29.699 [2024-05-15 01:26:05.297573] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:29.699 [2024-05-15 01:26:05.297581] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.699 [2024-05-15 01:26:05.297586] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.699 [2024-05-15 01:26:05.297591] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x247aca0) 00:23:29.699 [2024-05-15 01:26:05.297600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.699 [2024-05-15 01:26:05.297613] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4980, cid 0, qid 0 00:23:29.699 [2024-05-15 01:26:05.297734] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.699 [2024-05-15 01:26:05.297742] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.699 [2024-05-15 01:26:05.297747] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.699 [2024-05-15 01:26:05.297751] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4980) on tqpair=0x247aca0 00:23:29.699 [2024-05-15 01:26:05.297759] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:29.699 [2024-05-15 01:26:05.297769] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:29.699 [2024-05-15 01:26:05.297777] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.699 [2024-05-15 01:26:05.297782] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.699 [2024-05-15 01:26:05.297787] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x247aca0) 00:23:29.699 [2024-05-15 01:26:05.297794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.699 [2024-05-15 01:26:05.297807] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4980, cid 0, qid 0 00:23:29.699 [2024-05-15 01:26:05.297926] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.699 [2024-05-15 
01:26:05.297933] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.699 [2024-05-15 01:26:05.297938] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.699 [2024-05-15 01:26:05.297943] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4980) on tqpair=0x247aca0 00:23:29.699 [2024-05-15 01:26:05.297950] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:29.699 [2024-05-15 01:26:05.297961] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.699 [2024-05-15 01:26:05.297967] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.699 [2024-05-15 01:26:05.297974] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x247aca0) 00:23:29.699 [2024-05-15 01:26:05.297982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.699 [2024-05-15 01:26:05.297994] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4980, cid 0, qid 0 00:23:29.699 [2024-05-15 01:26:05.298282] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.699 [2024-05-15 01:26:05.298289] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.699 [2024-05-15 01:26:05.298293] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.699 [2024-05-15 01:26:05.298298] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4980) on tqpair=0x247aca0 00:23:29.699 [2024-05-15 01:26:05.298305] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:29.699 [2024-05-15 01:26:05.298311] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:29.699 [2024-05-15 01:26:05.298321] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:29.699 [2024-05-15 01:26:05.298428] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:29.699 [2024-05-15 01:26:05.298435] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:29.700 [2024-05-15 01:26:05.298444] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.298449] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.298454] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x247aca0) 00:23:29.700 [2024-05-15 01:26:05.298461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.700 [2024-05-15 01:26:05.298474] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4980, cid 0, qid 0 00:23:29.700 [2024-05-15 01:26:05.298594] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.700 [2024-05-15 01:26:05.298602] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.700 [2024-05-15 01:26:05.298607] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.298611] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4980) on tqpair=0x247aca0 00:23:29.700 [2024-05-15 01:26:05.298618] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:29.700 [2024-05-15 01:26:05.298630] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.298635] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.298639] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x247aca0) 00:23:29.700 [2024-05-15 01:26:05.298647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.700 [2024-05-15 01:26:05.298659] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4980, cid 0, qid 0 00:23:29.700 [2024-05-15 01:26:05.298777] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.700 [2024-05-15 01:26:05.298784] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.700 [2024-05-15 01:26:05.298789] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.298794] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4980) on tqpair=0x247aca0 00:23:29.700 [2024-05-15 01:26:05.298801] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:29.700 [2024-05-15 01:26:05.298807] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:29.700 [2024-05-15 01:26:05.298820] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:29.700 [2024-05-15 01:26:05.298831] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:29.700 [2024-05-15 01:26:05.298841] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.298846] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x247aca0) 00:23:29.700 [2024-05-15 01:26:05.298853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.700 [2024-05-15 01:26:05.298866] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4980, cid 0, qid 0 00:23:29.700 [2024-05-15 01:26:05.299019] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.700 [2024-05-15 01:26:05.299026] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.700 [2024-05-15 01:26:05.299031] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.299036] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x247aca0): datao=0, datal=4096, cccid=0 00:23:29.700 [2024-05-15 01:26:05.299042] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24e4980) on tqpair(0x247aca0): expected_datao=0, payload_size=4096 00:23:29.700 [2024-05-15 01:26:05.299048] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.299235] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.299241] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.340385] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.700 [2024-05-15 01:26:05.340397] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.700 [2024-05-15 01:26:05.340402] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.340408] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4980) on tqpair=0x247aca0 00:23:29.700 [2024-05-15 01:26:05.340418] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:23:29.700 [2024-05-15 01:26:05.340425] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:23:29.700 [2024-05-15 01:26:05.340431] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:23:29.700 [2024-05-15 01:26:05.340437] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:23:29.700 [2024-05-15 01:26:05.340443] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:23:29.700 [2024-05-15 01:26:05.340450] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:23:29.700 [2024-05-15 01:26:05.340465] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:23:29.700 [2024-05-15 01:26:05.340476] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.340481] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.340486] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x247aca0) 00:23:29.700 [2024-05-15 01:26:05.340494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:29.700 [2024-05-15 01:26:05.340509] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4980, cid 0, qid 0 00:23:29.700 [2024-05-15 01:26:05.340633] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.700 [2024-05-15 01:26:05.340641] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.700 [2024-05-15 01:26:05.340648] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.340653] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4980) on tqpair=0x247aca0 00:23:29.700 [2024-05-15 01:26:05.340666] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.340671] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.340675] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x247aca0) 00:23:29.700 [2024-05-15 01:26:05.340682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:23:29.700 [2024-05-15 01:26:05.340690] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.340694] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.340699] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x247aca0) 00:23:29.700 [2024-05-15 01:26:05.340705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.700 [2024-05-15 01:26:05.340713] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.340717] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.340722] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x247aca0) 00:23:29.700 [2024-05-15 01:26:05.340729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.700 [2024-05-15 01:26:05.340736] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.340741] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.340745] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.700 [2024-05-15 01:26:05.340752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.700 [2024-05-15 01:26:05.340758] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:23:29.700 [2024-05-15 01:26:05.340768] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:29.700 [2024-05-15 01:26:05.340776] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.340781] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x247aca0) 00:23:29.700 [2024-05-15 01:26:05.340788] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.700 [2024-05-15 01:26:05.340802] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4980, cid 0, qid 0 00:23:29.700 [2024-05-15 01:26:05.340809] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4ae0, cid 1, qid 0 00:23:29.700 [2024-05-15 01:26:05.340814] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4c40, cid 2, qid 0 00:23:29.700 [2024-05-15 01:26:05.340820] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.700 [2024-05-15 01:26:05.340825] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4f00, cid 4, qid 0 00:23:29.700 [2024-05-15 01:26:05.341134] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.700 [2024-05-15 01:26:05.341141] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.700 [2024-05-15 01:26:05.341146] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.341150] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4f00) on tqpair=0x247aca0 
00:23:29.700 [2024-05-15 01:26:05.341160] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:23:29.700 [2024-05-15 01:26:05.341169] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:23:29.700 [2024-05-15 01:26:05.341181] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.341187] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x247aca0) 00:23:29.700 [2024-05-15 01:26:05.345200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.700 [2024-05-15 01:26:05.345214] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4f00, cid 4, qid 0 00:23:29.700 [2024-05-15 01:26:05.345358] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.700 [2024-05-15 01:26:05.345366] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.700 [2024-05-15 01:26:05.345371] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.345376] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x247aca0): datao=0, datal=4096, cccid=4 00:23:29.700 [2024-05-15 01:26:05.345382] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24e4f00) on tqpair(0x247aca0): expected_datao=0, payload_size=4096 00:23:29.700 [2024-05-15 01:26:05.345388] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.345396] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.345401] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.700 [2024-05-15 01:26:05.345606] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.700 [2024-05-15 01:26:05.345613] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.701 [2024-05-15 01:26:05.345618] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.701 [2024-05-15 01:26:05.345622] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4f00) on tqpair=0x247aca0 00:23:29.701 [2024-05-15 01:26:05.345638] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:23:29.701 [2024-05-15 01:26:05.345666] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.701 [2024-05-15 01:26:05.345672] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x247aca0) 00:23:29.701 [2024-05-15 01:26:05.345679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.701 [2024-05-15 01:26:05.345687] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.701 [2024-05-15 01:26:05.345692] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.701 [2024-05-15 01:26:05.345697] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x247aca0) 00:23:29.701 [2024-05-15 01:26:05.345704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.701 [2024-05-15 01:26:05.345720] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4f00, cid 4, qid 0 00:23:29.701 [2024-05-15 01:26:05.345726] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e5060, cid 5, qid 0 00:23:29.701 [2024-05-15 01:26:05.345889] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.701 [2024-05-15 01:26:05.345897] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.701 [2024-05-15 01:26:05.345902] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.701 [2024-05-15 01:26:05.345906] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x247aca0): datao=0, datal=1024, cccid=4 00:23:29.701 [2024-05-15 01:26:05.345912] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24e4f00) on tqpair(0x247aca0): expected_datao=0, payload_size=1024 00:23:29.701 [2024-05-15 01:26:05.345918] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.701 [2024-05-15 01:26:05.345926] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.701 [2024-05-15 01:26:05.345931] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.701 [2024-05-15 01:26:05.345942] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.701 [2024-05-15 01:26:05.345948] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.701 [2024-05-15 01:26:05.345953] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.701 [2024-05-15 01:26:05.345958] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e5060) on tqpair=0x247aca0 00:23:29.701 [2024-05-15 01:26:05.386398] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.701 [2024-05-15 01:26:05.386414] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.701 [2024-05-15 01:26:05.386420] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.701 [2024-05-15 01:26:05.386426] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4f00) on tqpair=0x247aca0 00:23:29.701 [2024-05-15 01:26:05.386440] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.701 [2024-05-15 01:26:05.386446] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x247aca0) 00:23:29.701 [2024-05-15 01:26:05.386455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.701 [2024-05-15 01:26:05.386476] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4f00, cid 4, qid 0 00:23:29.701 [2024-05-15 01:26:05.386611] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.701 [2024-05-15 01:26:05.386619] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.701 [2024-05-15 01:26:05.386623] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.701 [2024-05-15 01:26:05.386628] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x247aca0): datao=0, datal=3072, cccid=4 00:23:29.701 [2024-05-15 01:26:05.386635] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24e4f00) on tqpair(0x247aca0): expected_datao=0, payload_size=3072 00:23:29.701 [2024-05-15 01:26:05.386641] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.701 [2024-05-15 01:26:05.386651] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:23:29.701 [2024-05-15 01:26:05.386657] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.701 [2024-05-15 01:26:05.386862] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.701 [2024-05-15 01:26:05.386869] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.701 [2024-05-15 01:26:05.386874] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.701 [2024-05-15 01:26:05.386879] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4f00) on tqpair=0x247aca0 00:23:29.701 [2024-05-15 01:26:05.386889] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.701 [2024-05-15 01:26:05.386894] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x247aca0) 00:23:29.701 [2024-05-15 01:26:05.386901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.701 [2024-05-15 01:26:05.386919] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4f00, cid 4, qid 0 00:23:29.966 [2024-05-15 01:26:05.391196] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.966 [2024-05-15 01:26:05.391205] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.966 [2024-05-15 01:26:05.391210] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.966 [2024-05-15 01:26:05.391215] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x247aca0): datao=0, datal=8, cccid=4 00:23:29.966 [2024-05-15 01:26:05.391221] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24e4f00) on tqpair(0x247aca0): expected_datao=0, payload_size=8 00:23:29.966 [2024-05-15 01:26:05.391227] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.966 [2024-05-15 01:26:05.391235] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.966 [2024-05-15 01:26:05.391240] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.966 [2024-05-15 01:26:05.431206] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.966 [2024-05-15 01:26:05.431220] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.966 [2024-05-15 01:26:05.431225] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.966 [2024-05-15 01:26:05.431230] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4f00) on tqpair=0x247aca0 00:23:29.966 ===================================================== 00:23:29.966 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:29.966 ===================================================== 00:23:29.966 Controller Capabilities/Features 00:23:29.966 ================================ 00:23:29.966 Vendor ID: 0000 00:23:29.966 Subsystem Vendor ID: 0000 00:23:29.966 Serial Number: .................... 00:23:29.966 Model Number: ........................................ 
00:23:29.966 Firmware Version: 24.05
00:23:29.966 Recommended Arb Burst: 0
00:23:29.966 IEEE OUI Identifier: 00 00 00
00:23:29.966 Multi-path I/O
00:23:29.966 May have multiple subsystem ports: No
00:23:29.966 May have multiple controllers: No
00:23:29.966 Associated with SR-IOV VF: No
00:23:29.966 Max Data Transfer Size: 131072
00:23:29.966 Max Number of Namespaces: 0
00:23:29.966 Max Number of I/O Queues: 1024
00:23:29.966 NVMe Specification Version (VS): 1.3
00:23:29.966 NVMe Specification Version (Identify): 1.3
00:23:29.966 Maximum Queue Entries: 128
00:23:29.966 Contiguous Queues Required: Yes
00:23:29.966 Arbitration Mechanisms Supported
00:23:29.966 Weighted Round Robin: Not Supported
00:23:29.966 Vendor Specific: Not Supported
00:23:29.966 Reset Timeout: 15000 ms
00:23:29.966 Doorbell Stride: 4 bytes
00:23:29.966 NVM Subsystem Reset: Not Supported
00:23:29.966 Command Sets Supported
00:23:29.966 NVM Command Set: Supported
00:23:29.966 Boot Partition: Not Supported
00:23:29.966 Memory Page Size Minimum: 4096 bytes
00:23:29.966 Memory Page Size Maximum: 4096 bytes
00:23:29.966 Persistent Memory Region: Not Supported
00:23:29.966 Optional Asynchronous Events Supported
00:23:29.966 Namespace Attribute Notices: Not Supported
00:23:29.966 Firmware Activation Notices: Not Supported
00:23:29.966 ANA Change Notices: Not Supported
00:23:29.966 PLE Aggregate Log Change Notices: Not Supported
00:23:29.966 LBA Status Info Alert Notices: Not Supported
00:23:29.966 EGE Aggregate Log Change Notices: Not Supported
00:23:29.966 Normal NVM Subsystem Shutdown event: Not Supported
00:23:29.966 Zone Descriptor Change Notices: Not Supported
00:23:29.966 Discovery Log Change Notices: Supported
00:23:29.966 Controller Attributes
00:23:29.966 128-bit Host Identifier: Not Supported
00:23:29.966 Non-Operational Permissive Mode: Not Supported
00:23:29.966 NVM Sets: Not Supported
00:23:29.966 Read Recovery Levels: Not Supported
00:23:29.966 Endurance Groups: Not Supported
00:23:29.966 Predictable Latency Mode: Not Supported
00:23:29.966 Traffic Based Keep ALive: Not Supported
00:23:29.966 Namespace Granularity: Not Supported
00:23:29.966 SQ Associations: Not Supported
00:23:29.966 UUID List: Not Supported
00:23:29.966 Multi-Domain Subsystem: Not Supported
00:23:29.966 Fixed Capacity Management: Not Supported
00:23:29.966 Variable Capacity Management: Not Supported
00:23:29.966 Delete Endurance Group: Not Supported
00:23:29.966 Delete NVM Set: Not Supported
00:23:29.966 Extended LBA Formats Supported: Not Supported
00:23:29.966 Flexible Data Placement Supported: Not Supported
00:23:29.966 
00:23:29.966 Controller Memory Buffer Support
00:23:29.966 ================================
00:23:29.966 Supported: No
00:23:29.966 
00:23:29.966 Persistent Memory Region Support
00:23:29.966 ================================
00:23:29.966 Supported: No
00:23:29.966 
00:23:29.966 Admin Command Set Attributes
00:23:29.966 ============================
00:23:29.966 Security Send/Receive: Not Supported
00:23:29.966 Format NVM: Not Supported
00:23:29.966 Firmware Activate/Download: Not Supported
00:23:29.966 Namespace Management: Not Supported
00:23:29.966 Device Self-Test: Not Supported
00:23:29.966 Directives: Not Supported
00:23:29.966 NVMe-MI: Not Supported
00:23:29.966 Virtualization Management: Not Supported
00:23:29.966 Doorbell Buffer Config: Not Supported
00:23:29.966 Get LBA Status Capability: Not Supported
00:23:29.966 Command & Feature Lockdown Capability: Not Supported
00:23:29.966 Abort Command Limit: 1
00:23:29.966 Async Event Request Limit: 4
00:23:29.966 Number of Firmware Slots: N/A
00:23:29.966 Firmware Slot 1 Read-Only: N/A
00:23:29.966 Firmware Activation Without Reset: N/A
00:23:29.966 Multiple Update Detection Support: N/A
00:23:29.966 Firmware Update Granularity: No Information Provided
00:23:29.966 Per-Namespace SMART Log: No
00:23:29.966 Asymmetric Namespace Access Log Page: Not Supported
00:23:29.966 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:29.966 Command Effects Log Page: Not Supported
00:23:29.966 Get Log Page Extended Data: Supported
00:23:29.966 Telemetry Log Pages: Not Supported
00:23:29.966 Persistent Event Log Pages: Not Supported
00:23:29.966 Supported Log Pages Log Page: May Support
00:23:29.966 Commands Supported & Effects Log Page: Not Supported
00:23:29.966 Feature Identifiers & Effects Log Page:May Support
00:23:29.966 NVMe-MI Commands & Effects Log Page: May Support
00:23:29.966 Data Area 4 for Telemetry Log: Not Supported
00:23:29.966 Error Log Page Entries Supported: 128
00:23:29.966 Keep Alive: Not Supported
00:23:29.966 
00:23:29.966 NVM Command Set Attributes
00:23:29.966 ==========================
00:23:29.966 Submission Queue Entry Size
00:23:29.966 Max: 1
00:23:29.966 Min: 1
00:23:29.966 Completion Queue Entry Size
00:23:29.966 Max: 1
00:23:29.966 Min: 1
00:23:29.966 Number of Namespaces: 0
00:23:29.966 Compare Command: Not Supported
00:23:29.966 Write Uncorrectable Command: Not Supported
00:23:29.966 Dataset Management Command: Not Supported
00:23:29.966 Write Zeroes Command: Not Supported
00:23:29.966 Set Features Save Field: Not Supported
00:23:29.966 Reservations: Not Supported
00:23:29.966 Timestamp: Not Supported
00:23:29.966 Copy: Not Supported
00:23:29.966 Volatile Write Cache: Not Present
00:23:29.966 Atomic Write Unit (Normal): 1
00:23:29.966 Atomic Write Unit (PFail): 1
00:23:29.966 Atomic Compare & Write Unit: 1
00:23:29.966 Fused Compare & Write: Supported
00:23:29.966 Scatter-Gather List
00:23:29.966 SGL Command Set: Supported
00:23:29.966 SGL Keyed: Supported
00:23:29.966 SGL Bit Bucket Descriptor: Not Supported
00:23:29.966 SGL Metadata Pointer: Not Supported
00:23:29.966 Oversized SGL: Not Supported
00:23:29.966 SGL Metadata Address: Not Supported
00:23:29.966 SGL Offset: Supported
00:23:29.966 Transport SGL Data Block: Not Supported
00:23:29.966 Replay Protected Memory Block: Not Supported
00:23:29.966 
00:23:29.966 Firmware Slot Information
00:23:29.966 =========================
00:23:29.966 Active slot: 0
00:23:29.966 
00:23:29.966 
00:23:29.966 Error Log
00:23:29.966 =========
00:23:29.966 
00:23:29.966 Active Namespaces
00:23:29.966 =================
00:23:29.966 Discovery Log Page
00:23:29.966 ==================
00:23:29.966 Generation Counter: 2
00:23:29.966 Number of Records: 2
00:23:29.966 Record Format: 0
00:23:29.966 
00:23:29.966 Discovery Log Entry 0
00:23:29.966 ----------------------
00:23:29.966 Transport Type: 3 (TCP)
00:23:29.966 Address Family: 1 (IPv4)
00:23:29.966 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:29.966 Entry Flags:
00:23:29.966 Duplicate Returned Information: 1
00:23:29.967 Explicit Persistent Connection Support for Discovery: 1
00:23:29.967 Transport Requirements:
00:23:29.967 Secure Channel: Not Required
00:23:29.967 Port ID: 0 (0x0000)
00:23:29.967 Controller ID: 65535 (0xffff)
00:23:29.967 Admin Max SQ Size: 128
00:23:29.967 Transport Service Identifier: 4420
00:23:29.967 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:29.967 Transport Address: 10.0.0.2
00:23:29.967 
Discovery Log Entry 1 00:23:29.967 ---------------------- 00:23:29.967 Transport Type: 3 (TCP) 00:23:29.967 Address Family: 1 (IPv4) 00:23:29.967 Subsystem Type: 2 (NVM Subsystem) 00:23:29.967 Entry Flags: 00:23:29.967 Duplicate Returned Information: 0 00:23:29.967 Explicit Persistent Connection Support for Discovery: 0 00:23:29.967 Transport Requirements: 00:23:29.967 Secure Channel: Not Required 00:23:29.967 Port ID: 0 (0x0000) 00:23:29.967 Controller ID: 65535 (0xffff) 00:23:29.967 Admin Max SQ Size: 128 00:23:29.967 Transport Service Identifier: 4420 00:23:29.967 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:23:29.967 Transport Address: 10.0.0.2 [2024-05-15 01:26:05.431315] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:23:29.967 [2024-05-15 01:26:05.431329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.967 [2024-05-15 01:26:05.431337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.967 [2024-05-15 01:26:05.431345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.967 [2024-05-15 01:26:05.431352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.967 [2024-05-15 01:26:05.431361] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.431366] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.431371] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.967 [2024-05-15 01:26:05.431379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.967 [2024-05-15 01:26:05.431395] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.967 [2024-05-15 01:26:05.431544] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.967 [2024-05-15 01:26:05.431552] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.967 [2024-05-15 01:26:05.431557] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.431562] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.967 [2024-05-15 01:26:05.431571] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.431576] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.431581] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.967 [2024-05-15 01:26:05.431588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.967 [2024-05-15 01:26:05.431605] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.967 [2024-05-15 01:26:05.431751] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.967 [2024-05-15 01:26:05.431758] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.967 [2024-05-15 01:26:05.431763] 
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.431768] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.967 [2024-05-15 01:26:05.431775] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:23:29.967 [2024-05-15 01:26:05.431781] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:23:29.967 [2024-05-15 01:26:05.431791] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.431796] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.431801] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.967 [2024-05-15 01:26:05.431808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.967 [2024-05-15 01:26:05.431820] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.967 [2024-05-15 01:26:05.431939] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.967 [2024-05-15 01:26:05.431948] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.967 [2024-05-15 01:26:05.431953] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.431958] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.967 [2024-05-15 01:26:05.431971] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.431976] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.431981] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.967 [2024-05-15 01:26:05.431988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.967 [2024-05-15 01:26:05.432000] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.967 [2024-05-15 01:26:05.432125] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.967 [2024-05-15 01:26:05.432132] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.967 [2024-05-15 01:26:05.432137] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.432142] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.967 [2024-05-15 01:26:05.432153] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.432158] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.432163] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.967 [2024-05-15 01:26:05.432170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.967 [2024-05-15 01:26:05.432182] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.967 [2024-05-15 01:26:05.432300] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.967 [2024-05-15 
01:26:05.432308] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.967 [2024-05-15 01:26:05.432312] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.432317] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.967 [2024-05-15 01:26:05.432330] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.432335] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.432340] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.967 [2024-05-15 01:26:05.432347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.967 [2024-05-15 01:26:05.432360] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.967 [2024-05-15 01:26:05.432485] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.967 [2024-05-15 01:26:05.432492] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.967 [2024-05-15 01:26:05.432496] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.432501] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.967 [2024-05-15 01:26:05.432512] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.432517] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.432522] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.967 [2024-05-15 01:26:05.432529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.967 [2024-05-15 01:26:05.432541] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.967 [2024-05-15 01:26:05.432660] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.967 [2024-05-15 01:26:05.432667] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.967 [2024-05-15 01:26:05.432674] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.432679] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.967 [2024-05-15 01:26:05.432691] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.432696] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.432701] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.967 [2024-05-15 01:26:05.432708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.967 [2024-05-15 01:26:05.432720] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.967 [2024-05-15 01:26:05.432839] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.967 [2024-05-15 01:26:05.432846] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.967 [2024-05-15 01:26:05.432851] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:23:29.967 [2024-05-15 01:26:05.432856] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.967 [2024-05-15 01:26:05.432867] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.432873] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.432877] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.967 [2024-05-15 01:26:05.432884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.967 [2024-05-15 01:26:05.432896] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.967 [2024-05-15 01:26:05.433016] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.967 [2024-05-15 01:26:05.433023] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.967 [2024-05-15 01:26:05.433028] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.433033] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.967 [2024-05-15 01:26:05.433044] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.433049] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.967 [2024-05-15 01:26:05.433054] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.968 [2024-05-15 01:26:05.433061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.968 [2024-05-15 01:26:05.433073] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.968 [2024-05-15 01:26:05.433200] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.968 [2024-05-15 01:26:05.433208] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.968 [2024-05-15 01:26:05.433213] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.433217] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.968 [2024-05-15 01:26:05.433229] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.433234] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.433239] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.968 [2024-05-15 01:26:05.433246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.968 [2024-05-15 01:26:05.433258] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.968 [2024-05-15 01:26:05.433376] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.968 [2024-05-15 01:26:05.433383] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.968 [2024-05-15 01:26:05.433388] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.433395] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.968 [2024-05-15 01:26:05.433408] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.433413] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.433417] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.968 [2024-05-15 01:26:05.433424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.968 [2024-05-15 01:26:05.433436] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.968 [2024-05-15 01:26:05.433559] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.968 [2024-05-15 01:26:05.433566] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.968 [2024-05-15 01:26:05.433570] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.433575] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.968 [2024-05-15 01:26:05.433587] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.433592] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.433597] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.968 [2024-05-15 01:26:05.433604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.968 [2024-05-15 01:26:05.433615] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.968 [2024-05-15 01:26:05.433739] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.968 [2024-05-15 01:26:05.433746] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.968 [2024-05-15 01:26:05.433751] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.433755] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.968 [2024-05-15 01:26:05.433767] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.433772] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.433777] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.968 [2024-05-15 01:26:05.433784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.968 [2024-05-15 01:26:05.433795] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.968 [2024-05-15 01:26:05.434025] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.968 [2024-05-15 01:26:05.434035] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.968 [2024-05-15 01:26:05.434040] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.434045] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.968 [2024-05-15 01:26:05.434058] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.434063] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.968 [2024-05-15 
01:26:05.434068] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.968 [2024-05-15 01:26:05.434076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.968 [2024-05-15 01:26:05.434089] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.968 [2024-05-15 01:26:05.434211] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.968 [2024-05-15 01:26:05.434219] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.968 [2024-05-15 01:26:05.434223] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.434228] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.968 [2024-05-15 01:26:05.434243] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.434249] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.434253] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.968 [2024-05-15 01:26:05.434261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.968 [2024-05-15 01:26:05.434273] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.968 [2024-05-15 01:26:05.434393] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.968 [2024-05-15 01:26:05.434400] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.968 [2024-05-15 01:26:05.434405] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.434409] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.968 [2024-05-15 01:26:05.434421] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.434426] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.434431] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.968 [2024-05-15 01:26:05.434438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.968 [2024-05-15 01:26:05.434450] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.968 [2024-05-15 01:26:05.434573] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.968 [2024-05-15 01:26:05.434580] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.968 [2024-05-15 01:26:05.434585] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.434590] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.968 [2024-05-15 01:26:05.434601] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.434607] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.434611] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.968 [2024-05-15 01:26:05.434618] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.968 [2024-05-15 01:26:05.434630] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.968 [2024-05-15 01:26:05.434752] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.968 [2024-05-15 01:26:05.434759] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.968 [2024-05-15 01:26:05.434764] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.434769] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.968 [2024-05-15 01:26:05.434780] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.434785] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.434790] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.968 [2024-05-15 01:26:05.434797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.968 [2024-05-15 01:26:05.434809] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.968 [2024-05-15 01:26:05.434927] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.968 [2024-05-15 01:26:05.434934] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.968 [2024-05-15 01:26:05.434939] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.434944] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.968 [2024-05-15 01:26:05.434958] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.434963] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.434968] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.968 [2024-05-15 01:26:05.434975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.968 [2024-05-15 01:26:05.434986] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.968 [2024-05-15 01:26:05.435116] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.968 [2024-05-15 01:26:05.435123] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.968 [2024-05-15 01:26:05.435127] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.435132] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.968 [2024-05-15 01:26:05.435144] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.435149] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.435153] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.968 [2024-05-15 01:26:05.435160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.968 [2024-05-15 01:26:05.435172] nvme_tcp.c: 
924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.968 [2024-05-15 01:26:05.439201] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.968 [2024-05-15 01:26:05.439211] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.968 [2024-05-15 01:26:05.439215] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.439220] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.968 [2024-05-15 01:26:05.439233] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.439238] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.968 [2024-05-15 01:26:05.439243] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x247aca0) 00:23:29.968 [2024-05-15 01:26:05.439251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.968 [2024-05-15 01:26:05.439265] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24e4da0, cid 3, qid 0 00:23:29.969 [2024-05-15 01:26:05.439500] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.969 [2024-05-15 01:26:05.439507] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.969 [2024-05-15 01:26:05.439512] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.439517] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24e4da0) on tqpair=0x247aca0 00:23:29.969 [2024-05-15 01:26:05.439527] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:23:29.969 00:23:29.969 01:26:05 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:29.969 [2024-05-15 01:26:05.476876] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
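The test step above runs spdk_nvme_identify from host/identify.sh against the TCP listener at 10.0.0.2:4420 for subsystem nqn.2016-06.io.spdk:cnode1, and because it passes -L all, every SPDK debug log flag is enabled, which is what produces the dense *DEBUG* trace that follows. A minimal sketch of repeating that step by hand is shown below; the SPDK_DIR path is copied from this job's workspace and is only an assumption about where your tree was built.

#!/usr/bin/env bash
# Sketch only: re-run the identify step from this test by hand.
# SPDK_DIR is assumed to match this job's workspace layout; adjust as needed.
set -euo pipefail
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# -r takes the same transport ID string the test uses (TCP, IPv4, target
# listener 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1);
# -L all enables every debug log flag, producing the *DEBUG* records below.
"${SPDK_DIR}/build/bin/spdk_nvme_identify" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -L all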
00:23:29.969 [2024-05-15 01:26:05.476915] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid4190639 ] 00:23:29.969 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.969 [2024-05-15 01:26:05.506011] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:23:29.969 [2024-05-15 01:26:05.506056] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:29.969 [2024-05-15 01:26:05.506062] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:29.969 [2024-05-15 01:26:05.506074] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:29.969 [2024-05-15 01:26:05.506083] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:29.969 [2024-05-15 01:26:05.506562] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:23:29.969 [2024-05-15 01:26:05.506584] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x12b2ca0 0 00:23:29.969 [2024-05-15 01:26:05.521196] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:29.969 [2024-05-15 01:26:05.521216] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:29.969 [2024-05-15 01:26:05.521222] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:29.969 [2024-05-15 01:26:05.521226] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:29.969 [2024-05-15 01:26:05.521261] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.521267] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.521272] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12b2ca0) 00:23:29.969 [2024-05-15 01:26:05.521284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:29.969 [2024-05-15 01:26:05.521301] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131c980, cid 0, qid 0 00:23:29.969 [2024-05-15 01:26:05.529199] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.969 [2024-05-15 01:26:05.529208] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.969 [2024-05-15 01:26:05.529213] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.529218] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131c980) on tqpair=0x12b2ca0 00:23:29.969 [2024-05-15 01:26:05.529230] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:29.969 [2024-05-15 01:26:05.529236] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:23:29.969 [2024-05-15 01:26:05.529243] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:23:29.969 [2024-05-15 01:26:05.529254] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.529260] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.969 [2024-05-15 
01:26:05.529264] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12b2ca0) 00:23:29.969 [2024-05-15 01:26:05.529273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.969 [2024-05-15 01:26:05.529287] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131c980, cid 0, qid 0 00:23:29.969 [2024-05-15 01:26:05.529499] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.969 [2024-05-15 01:26:05.529508] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.969 [2024-05-15 01:26:05.529513] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.529518] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131c980) on tqpair=0x12b2ca0 00:23:29.969 [2024-05-15 01:26:05.529526] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:23:29.969 [2024-05-15 01:26:05.529536] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:23:29.969 [2024-05-15 01:26:05.529545] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.529553] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.529558] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12b2ca0) 00:23:29.969 [2024-05-15 01:26:05.529566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.969 [2024-05-15 01:26:05.529580] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131c980, cid 0, qid 0 00:23:29.969 [2024-05-15 01:26:05.529712] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.969 [2024-05-15 01:26:05.529719] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.969 [2024-05-15 01:26:05.529724] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.529729] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131c980) on tqpair=0x12b2ca0 00:23:29.969 [2024-05-15 01:26:05.529736] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:23:29.969 [2024-05-15 01:26:05.529746] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:23:29.969 [2024-05-15 01:26:05.529754] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.529759] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.529763] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12b2ca0) 00:23:29.969 [2024-05-15 01:26:05.529771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.969 [2024-05-15 01:26:05.529783] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131c980, cid 0, qid 0 00:23:29.969 [2024-05-15 01:26:05.529926] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.969 [2024-05-15 01:26:05.529933] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
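Each FABRIC PROPERTY GET/SET notice in the surrounding records is the host reading or writing one controller register (VS, CAP, CC, CSTS) with a Fabrics property command on the admin queue, and every "setting state to ..." line marks one step of the controller initialization state machine (connect adminq, read vs, read cap, check en, enable the controller, wait for CSTS.RDY = 1, then identify and feature setup). With the -L all output saved to a file, the sequence is easy to follow by filtering just those lines; identify.log below is a hypothetical capture, not a file this job writes.

# Sketch only: follow the init state machine in a saved -L all trace.
grep -o 'setting state to .*' identify.log | uniq
# Count the register accesses (Fabrics property commands) the bring-up needed.
grep -c 'FABRIC PROPERTY' identify.log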
00:23:29.969 [2024-05-15 01:26:05.529938] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.529943] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131c980) on tqpair=0x12b2ca0 00:23:29.969 [2024-05-15 01:26:05.529950] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:29.969 [2024-05-15 01:26:05.529960] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.529965] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.529970] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12b2ca0) 00:23:29.969 [2024-05-15 01:26:05.529977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.969 [2024-05-15 01:26:05.529990] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131c980, cid 0, qid 0 00:23:29.969 [2024-05-15 01:26:05.530127] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.969 [2024-05-15 01:26:05.530134] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.969 [2024-05-15 01:26:05.530138] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.530143] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131c980) on tqpair=0x12b2ca0 00:23:29.969 [2024-05-15 01:26:05.530150] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:23:29.969 [2024-05-15 01:26:05.530156] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:23:29.969 [2024-05-15 01:26:05.530166] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:29.969 [2024-05-15 01:26:05.530273] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:23:29.969 [2024-05-15 01:26:05.530278] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:29.969 [2024-05-15 01:26:05.530290] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.530295] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.530299] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12b2ca0) 00:23:29.969 [2024-05-15 01:26:05.530307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.969 [2024-05-15 01:26:05.530319] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131c980, cid 0, qid 0 00:23:29.969 [2024-05-15 01:26:05.530443] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.969 [2024-05-15 01:26:05.530450] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.969 [2024-05-15 01:26:05.530455] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.530459] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131c980) on 
tqpair=0x12b2ca0 00:23:29.969 [2024-05-15 01:26:05.530467] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:29.969 [2024-05-15 01:26:05.530478] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.530483] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.530488] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12b2ca0) 00:23:29.969 [2024-05-15 01:26:05.530495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.969 [2024-05-15 01:26:05.530507] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131c980, cid 0, qid 0 00:23:29.969 [2024-05-15 01:26:05.530657] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.969 [2024-05-15 01:26:05.530664] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.969 [2024-05-15 01:26:05.530668] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.969 [2024-05-15 01:26:05.530673] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131c980) on tqpair=0x12b2ca0 00:23:29.969 [2024-05-15 01:26:05.530680] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:29.969 [2024-05-15 01:26:05.530686] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:23:29.970 [2024-05-15 01:26:05.530695] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:23:29.970 [2024-05-15 01:26:05.530709] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:23:29.970 [2024-05-15 01:26:05.530718] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.530723] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12b2ca0) 00:23:29.970 [2024-05-15 01:26:05.530731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.970 [2024-05-15 01:26:05.530743] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131c980, cid 0, qid 0 00:23:29.970 [2024-05-15 01:26:05.530895] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.970 [2024-05-15 01:26:05.530903] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.970 [2024-05-15 01:26:05.530908] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.530912] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12b2ca0): datao=0, datal=4096, cccid=0 00:23:29.970 [2024-05-15 01:26:05.530919] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x131c980) on tqpair(0x12b2ca0): expected_datao=0, payload_size=4096 00:23:29.970 [2024-05-15 01:26:05.530927] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.530935] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.530940] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.531016] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.970 [2024-05-15 01:26:05.531023] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.970 [2024-05-15 01:26:05.531028] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.531033] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131c980) on tqpair=0x12b2ca0 00:23:29.970 [2024-05-15 01:26:05.531042] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:23:29.970 [2024-05-15 01:26:05.531048] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:23:29.970 [2024-05-15 01:26:05.531054] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:23:29.970 [2024-05-15 01:26:05.531059] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:23:29.970 [2024-05-15 01:26:05.531065] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:23:29.970 [2024-05-15 01:26:05.531071] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:23:29.970 [2024-05-15 01:26:05.531085] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:23:29.970 [2024-05-15 01:26:05.531095] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.531100] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.531105] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12b2ca0) 00:23:29.970 [2024-05-15 01:26:05.531113] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:29.970 [2024-05-15 01:26:05.531126] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131c980, cid 0, qid 0 00:23:29.970 [2024-05-15 01:26:05.531256] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.970 [2024-05-15 01:26:05.531264] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.970 [2024-05-15 01:26:05.531268] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.531273] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131c980) on tqpair=0x12b2ca0 00:23:29.970 [2024-05-15 01:26:05.531284] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.531289] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.531294] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12b2ca0) 00:23:29.970 [2024-05-15 01:26:05.531301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.970 [2024-05-15 01:26:05.531308] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.531313] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.531318] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x12b2ca0) 00:23:29.970 [2024-05-15 01:26:05.531324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.970 [2024-05-15 01:26:05.531331] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.531336] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.531340] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x12b2ca0) 00:23:29.970 [2024-05-15 01:26:05.531346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.970 [2024-05-15 01:26:05.531355] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.531360] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.531365] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12b2ca0) 00:23:29.970 [2024-05-15 01:26:05.531371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.970 [2024-05-15 01:26:05.531377] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:29.970 [2024-05-15 01:26:05.531388] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:29.970 [2024-05-15 01:26:05.531395] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.531400] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12b2ca0) 00:23:29.970 [2024-05-15 01:26:05.531407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.970 [2024-05-15 01:26:05.531422] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131c980, cid 0, qid 0 00:23:29.970 [2024-05-15 01:26:05.531428] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131cae0, cid 1, qid 0 00:23:29.970 [2024-05-15 01:26:05.531433] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131cc40, cid 2, qid 0 00:23:29.970 [2024-05-15 01:26:05.531439] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131cda0, cid 3, qid 0 00:23:29.970 [2024-05-15 01:26:05.531444] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131cf00, cid 4, qid 0 00:23:29.970 [2024-05-15 01:26:05.531617] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.970 [2024-05-15 01:26:05.531624] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.970 [2024-05-15 01:26:05.531629] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.531634] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131cf00) on tqpair=0x12b2ca0 00:23:29.970 [2024-05-15 01:26:05.531643] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:29.970 [2024-05-15 01:26:05.531650] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to identify controller iocs specific (timeout 30000 ms) 00:23:29.970 [2024-05-15 01:26:05.531660] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:29.970 [2024-05-15 01:26:05.531667] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:29.970 [2024-05-15 01:26:05.531675] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.531680] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.531685] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12b2ca0) 00:23:29.970 [2024-05-15 01:26:05.531692] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:29.970 [2024-05-15 01:26:05.531705] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131cf00, cid 4, qid 0 00:23:29.970 [2024-05-15 01:26:05.531825] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.970 [2024-05-15 01:26:05.531832] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.970 [2024-05-15 01:26:05.531837] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.531842] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131cf00) on tqpair=0x12b2ca0 00:23:29.970 [2024-05-15 01:26:05.531886] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:29.970 [2024-05-15 01:26:05.531900] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:29.970 [2024-05-15 01:26:05.531909] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.970 [2024-05-15 01:26:05.531914] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12b2ca0) 00:23:29.970 [2024-05-15 01:26:05.531921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.971 [2024-05-15 01:26:05.531934] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131cf00, cid 4, qid 0 00:23:29.971 [2024-05-15 01:26:05.532070] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.971 [2024-05-15 01:26:05.532078] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.971 [2024-05-15 01:26:05.532082] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.532087] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12b2ca0): datao=0, datal=4096, cccid=4 00:23:29.971 [2024-05-15 01:26:05.532093] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x131cf00) on tqpair(0x12b2ca0): expected_datao=0, payload_size=4096 00:23:29.971 [2024-05-15 01:26:05.532099] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.532106] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.532111] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.532178] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.971 [2024-05-15 01:26:05.532185] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.971 [2024-05-15 01:26:05.532189] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.532200] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131cf00) on tqpair=0x12b2ca0 00:23:29.971 [2024-05-15 01:26:05.532214] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:29.971 [2024-05-15 01:26:05.532223] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:29.971 [2024-05-15 01:26:05.532235] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:29.971 [2024-05-15 01:26:05.532244] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.532249] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12b2ca0) 00:23:29.971 [2024-05-15 01:26:05.532256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.971 [2024-05-15 01:26:05.532271] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131cf00, cid 4, qid 0 00:23:29.971 [2024-05-15 01:26:05.532454] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.971 [2024-05-15 01:26:05.532461] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.971 [2024-05-15 01:26:05.532466] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.532471] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12b2ca0): datao=0, datal=4096, cccid=4 00:23:29.971 [2024-05-15 01:26:05.532477] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x131cf00) on tqpair(0x12b2ca0): expected_datao=0, payload_size=4096 00:23:29.971 [2024-05-15 01:26:05.532483] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.532490] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.532495] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.532697] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.971 [2024-05-15 01:26:05.532703] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.971 [2024-05-15 01:26:05.532708] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.532717] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131cf00) on tqpair=0x12b2ca0 00:23:29.971 [2024-05-15 01:26:05.532729] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:29.971 [2024-05-15 01:26:05.532740] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:29.971 [2024-05-15 01:26:05.532748] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.532753] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x12b2ca0) 00:23:29.971 [2024-05-15 01:26:05.532761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.971 [2024-05-15 01:26:05.532774] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131cf00, cid 4, qid 0 00:23:29.971 [2024-05-15 01:26:05.532910] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.971 [2024-05-15 01:26:05.532917] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.971 [2024-05-15 01:26:05.532922] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.532927] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12b2ca0): datao=0, datal=4096, cccid=4 00:23:29.971 [2024-05-15 01:26:05.532933] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x131cf00) on tqpair(0x12b2ca0): expected_datao=0, payload_size=4096 00:23:29.971 [2024-05-15 01:26:05.532939] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.532946] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.532951] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.533017] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.971 [2024-05-15 01:26:05.533024] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.971 [2024-05-15 01:26:05.533029] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.533034] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131cf00) on tqpair=0x12b2ca0 00:23:29.971 [2024-05-15 01:26:05.533046] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:29.971 [2024-05-15 01:26:05.533057] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:29.971 [2024-05-15 01:26:05.533066] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:29.971 [2024-05-15 01:26:05.533074] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:29.971 [2024-05-15 01:26:05.533080] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:29.971 [2024-05-15 01:26:05.533087] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:29.971 [2024-05-15 01:26:05.533093] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:29.971 [2024-05-15 01:26:05.533099] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:29.971 [2024-05-15 01:26:05.533117] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.533122] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12b2ca0) 00:23:29.971 [2024-05-15 01:26:05.533130] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.971 [2024-05-15 01:26:05.533137] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.533144] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.533149] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12b2ca0) 00:23:29.971 [2024-05-15 01:26:05.533155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:29.971 [2024-05-15 01:26:05.533172] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131cf00, cid 4, qid 0 00:23:29.971 [2024-05-15 01:26:05.533178] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131d060, cid 5, qid 0 00:23:29.971 [2024-05-15 01:26:05.537197] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.971 [2024-05-15 01:26:05.537205] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.971 [2024-05-15 01:26:05.537210] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.537214] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131cf00) on tqpair=0x12b2ca0 00:23:29.971 [2024-05-15 01:26:05.537223] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.971 [2024-05-15 01:26:05.537230] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.971 [2024-05-15 01:26:05.537234] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.537239] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131d060) on tqpair=0x12b2ca0 00:23:29.971 [2024-05-15 01:26:05.537251] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.537256] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12b2ca0) 00:23:29.971 [2024-05-15 01:26:05.537263] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.971 [2024-05-15 01:26:05.537276] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131d060, cid 5, qid 0 00:23:29.971 [2024-05-15 01:26:05.537515] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.971 [2024-05-15 01:26:05.537522] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.971 [2024-05-15 01:26:05.537527] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.537532] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131d060) on tqpair=0x12b2ca0 00:23:29.971 [2024-05-15 01:26:05.537545] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.537550] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12b2ca0) 00:23:29.971 [2024-05-15 01:26:05.537557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.971 [2024-05-15 01:26:05.537569] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131d060, cid 5, qid 0 00:23:29.971 [2024-05-15 01:26:05.537729] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.971 [2024-05-15 01:26:05.537737] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.971 [2024-05-15 01:26:05.537741] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.537746] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131d060) on tqpair=0x12b2ca0 00:23:29.971 [2024-05-15 01:26:05.537758] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.537763] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12b2ca0) 00:23:29.971 [2024-05-15 01:26:05.537770] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.971 [2024-05-15 01:26:05.537782] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131d060, cid 5, qid 0 00:23:29.971 [2024-05-15 01:26:05.537900] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.971 [2024-05-15 01:26:05.537907] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.971 [2024-05-15 01:26:05.537912] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.537919] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131d060) on tqpair=0x12b2ca0 00:23:29.971 [2024-05-15 01:26:05.537934] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.537939] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12b2ca0) 00:23:29.971 [2024-05-15 01:26:05.537946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.971 [2024-05-15 01:26:05.537954] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.971 [2024-05-15 01:26:05.537959] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12b2ca0) 00:23:29.972 [2024-05-15 01:26:05.537966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.972 [2024-05-15 01:26:05.537974] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.537979] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x12b2ca0) 00:23:29.972 [2024-05-15 01:26:05.537985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.972 [2024-05-15 01:26:05.537996] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.538001] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x12b2ca0) 00:23:29.972 [2024-05-15 01:26:05.538008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.972 [2024-05-15 01:26:05.538022] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131d060, cid 5, qid 0 00:23:29.972 [2024-05-15 01:26:05.538028] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131cf00, cid 4, qid 0 00:23:29.972 [2024-05-15 01:26:05.538033] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x131d1c0, cid 6, qid 0 00:23:29.972 [2024-05-15 01:26:05.538039] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131d320, cid 7, qid 0 00:23:29.972 [2024-05-15 01:26:05.538393] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.972 [2024-05-15 01:26:05.538400] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.972 [2024-05-15 01:26:05.538405] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.538410] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12b2ca0): datao=0, datal=8192, cccid=5 00:23:29.972 [2024-05-15 01:26:05.538416] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x131d060) on tqpair(0x12b2ca0): expected_datao=0, payload_size=8192 00:23:29.972 [2024-05-15 01:26:05.538422] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.538826] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.538831] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.538837] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.972 [2024-05-15 01:26:05.538843] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.972 [2024-05-15 01:26:05.538848] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.538852] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12b2ca0): datao=0, datal=512, cccid=4 00:23:29.972 [2024-05-15 01:26:05.538858] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x131cf00) on tqpair(0x12b2ca0): expected_datao=0, payload_size=512 00:23:29.972 [2024-05-15 01:26:05.538864] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.538871] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.538875] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.538882] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.972 [2024-05-15 01:26:05.538890] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.972 [2024-05-15 01:26:05.538895] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.538899] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12b2ca0): datao=0, datal=512, cccid=6 00:23:29.972 [2024-05-15 01:26:05.538905] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x131d1c0) on tqpair(0x12b2ca0): expected_datao=0, payload_size=512 00:23:29.972 [2024-05-15 01:26:05.538911] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.538918] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.538922] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.538928] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:29.972 [2024-05-15 01:26:05.538935] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:29.972 [2024-05-15 01:26:05.538939] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.538944] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12b2ca0): datao=0, datal=4096, cccid=7 
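The c2h_data records just above carry the payloads of the IDENTIFY and GET LOG PAGE commands; spdk_nvme_identify formats that data into the human-readable controller report that begins a few lines below (vendor and model identifiers, transfer limits, supported commands and log pages, power states, health data). To pull only that report out of a saved console capture, a sed slice from the banner onward works; identify.log is again just an example file name.

# Sketch only: extract the human-readable report from a saved capture,
# dropping the *DEBUG* trace records that are interleaved with it.
sed -n '/NVMe over Fabrics controller at/,$p' identify.log | grep -v '\*DEBUG\*'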
00:23:29.972 [2024-05-15 01:26:05.538950] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x131d320) on tqpair(0x12b2ca0): expected_datao=0, payload_size=4096 00:23:29.972 [2024-05-15 01:26:05.538955] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.538962] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.538967] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.539074] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.972 [2024-05-15 01:26:05.539081] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.972 [2024-05-15 01:26:05.539086] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.539090] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131d060) on tqpair=0x12b2ca0 00:23:29.972 [2024-05-15 01:26:05.539105] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.972 [2024-05-15 01:26:05.539111] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.972 [2024-05-15 01:26:05.539116] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.539120] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131cf00) on tqpair=0x12b2ca0 00:23:29.972 [2024-05-15 01:26:05.539131] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.972 [2024-05-15 01:26:05.539137] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.972 [2024-05-15 01:26:05.539142] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.539147] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131d1c0) on tqpair=0x12b2ca0 00:23:29.972 [2024-05-15 01:26:05.539157] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.972 [2024-05-15 01:26:05.539163] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.972 [2024-05-15 01:26:05.539168] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.972 [2024-05-15 01:26:05.539173] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131d320) on tqpair=0x12b2ca0 00:23:29.972 ===================================================== 00:23:29.972 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:29.972 ===================================================== 00:23:29.972 Controller Capabilities/Features 00:23:29.972 ================================ 00:23:29.972 Vendor ID: 8086 00:23:29.972 Subsystem Vendor ID: 8086 00:23:29.972 Serial Number: SPDK00000000000001 00:23:29.972 Model Number: SPDK bdev Controller 00:23:29.972 Firmware Version: 24.05 00:23:29.972 Recommended Arb Burst: 6 00:23:29.972 IEEE OUI Identifier: e4 d2 5c 00:23:29.972 Multi-path I/O 00:23:29.972 May have multiple subsystem ports: Yes 00:23:29.972 May have multiple controllers: Yes 00:23:29.972 Associated with SR-IOV VF: No 00:23:29.972 Max Data Transfer Size: 131072 00:23:29.972 Max Number of Namespaces: 32 00:23:29.972 Max Number of I/O Queues: 127 00:23:29.972 NVMe Specification Version (VS): 1.3 00:23:29.972 NVMe Specification Version (Identify): 1.3 00:23:29.972 Maximum Queue Entries: 128 00:23:29.972 Contiguous Queues Required: Yes 00:23:29.972 Arbitration Mechanisms Supported 00:23:29.972 Weighted Round Robin: Not Supported 00:23:29.972 Vendor 
Specific: Not Supported
00:23:29.972 Reset Timeout: 15000 ms
00:23:29.972 Doorbell Stride: 4 bytes
00:23:29.972 NVM Subsystem Reset: Not Supported
00:23:29.972 Command Sets Supported
00:23:29.972 NVM Command Set: Supported
00:23:29.972 Boot Partition: Not Supported
00:23:29.972 Memory Page Size Minimum: 4096 bytes
00:23:29.972 Memory Page Size Maximum: 4096 bytes
00:23:29.972 Persistent Memory Region: Not Supported
00:23:29.972 Optional Asynchronous Events Supported
00:23:29.972 Namespace Attribute Notices: Supported
00:23:29.972 Firmware Activation Notices: Not Supported
00:23:29.972 ANA Change Notices: Not Supported
00:23:29.972 PLE Aggregate Log Change Notices: Not Supported
00:23:29.972 LBA Status Info Alert Notices: Not Supported
00:23:29.972 EGE Aggregate Log Change Notices: Not Supported
00:23:29.972 Normal NVM Subsystem Shutdown event: Not Supported
00:23:29.972 Zone Descriptor Change Notices: Not Supported
00:23:29.972 Discovery Log Change Notices: Not Supported
00:23:29.972 Controller Attributes
00:23:29.972 128-bit Host Identifier: Supported
00:23:29.972 Non-Operational Permissive Mode: Not Supported
00:23:29.972 NVM Sets: Not Supported
00:23:29.972 Read Recovery Levels: Not Supported
00:23:29.972 Endurance Groups: Not Supported
00:23:29.972 Predictable Latency Mode: Not Supported
00:23:29.972 Traffic Based Keep ALive: Not Supported
00:23:29.972 Namespace Granularity: Not Supported
00:23:29.972 SQ Associations: Not Supported
00:23:29.972 UUID List: Not Supported
00:23:29.972 Multi-Domain Subsystem: Not Supported
00:23:29.972 Fixed Capacity Management: Not Supported
00:23:29.972 Variable Capacity Management: Not Supported
00:23:29.972 Delete Endurance Group: Not Supported
00:23:29.972 Delete NVM Set: Not Supported
00:23:29.972 Extended LBA Formats Supported: Not Supported
00:23:29.972 Flexible Data Placement Supported: Not Supported
00:23:29.972 
00:23:29.972 Controller Memory Buffer Support
00:23:29.972 ================================
00:23:29.972 Supported: No
00:23:29.972 
00:23:29.972 Persistent Memory Region Support
00:23:29.972 ================================
00:23:29.972 Supported: No
00:23:29.972 
00:23:29.972 Admin Command Set Attributes
00:23:29.972 ============================
00:23:29.972 Security Send/Receive: Not Supported
00:23:29.972 Format NVM: Not Supported
00:23:29.972 Firmware Activate/Download: Not Supported
00:23:29.972 Namespace Management: Not Supported
00:23:29.972 Device Self-Test: Not Supported
00:23:29.972 Directives: Not Supported
00:23:29.972 NVMe-MI: Not Supported
00:23:29.972 Virtualization Management: Not Supported
00:23:29.972 Doorbell Buffer Config: Not Supported
00:23:29.972 Get LBA Status Capability: Not Supported
00:23:29.972 Command & Feature Lockdown Capability: Not Supported
00:23:29.972 Abort Command Limit: 4
00:23:29.972 Async Event Request Limit: 4
00:23:29.972 Number of Firmware Slots: N/A
00:23:29.972 Firmware Slot 1 Read-Only: N/A
00:23:29.972 Firmware Activation Without Reset: N/A
00:23:29.972 Multiple Update Detection Support: N/A
00:23:29.973 Firmware Update Granularity: No Information Provided
00:23:29.973 Per-Namespace SMART Log: No
00:23:29.973 Asymmetric Namespace Access Log Page: Not Supported
00:23:29.973 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:23:29.973 Command Effects Log Page: Supported
00:23:29.973 Get Log Page Extended Data: Supported
00:23:29.973 Telemetry Log Pages: Not Supported
00:23:29.973 Persistent Event Log Pages: Not Supported
00:23:29.973 Supported Log Pages Log Page: May Support
00:23:29.973 Commands Supported & Effects Log Page: Not Supported
00:23:29.973 Feature Identifiers & Effects Log Page:May Support
00:23:29.973 NVMe-MI Commands & Effects Log Page: May Support
00:23:29.973 Data Area 4 for Telemetry Log: Not Supported
00:23:29.973 Error Log Page Entries Supported: 128
00:23:29.973 Keep Alive: Supported
00:23:29.973 Keep Alive Granularity: 10000 ms
00:23:29.973 
00:23:29.973 NVM Command Set Attributes
00:23:29.973 ==========================
00:23:29.973 Submission Queue Entry Size
00:23:29.973 Max: 64
00:23:29.973 Min: 64
00:23:29.973 Completion Queue Entry Size
00:23:29.973 Max: 16
00:23:29.973 Min: 16
00:23:29.973 Number of Namespaces: 32
00:23:29.973 Compare Command: Supported
00:23:29.973 Write Uncorrectable Command: Not Supported
00:23:29.973 Dataset Management Command: Supported
00:23:29.973 Write Zeroes Command: Supported
00:23:29.973 Set Features Save Field: Not Supported
00:23:29.973 Reservations: Supported
00:23:29.973 Timestamp: Not Supported
00:23:29.973 Copy: Supported
00:23:29.973 Volatile Write Cache: Present
00:23:29.973 Atomic Write Unit (Normal): 1
00:23:29.973 Atomic Write Unit (PFail): 1
00:23:29.973 Atomic Compare & Write Unit: 1
00:23:29.973 Fused Compare & Write: Supported
00:23:29.973 Scatter-Gather List
00:23:29.973 SGL Command Set: Supported
00:23:29.973 SGL Keyed: Supported
00:23:29.973 SGL Bit Bucket Descriptor: Not Supported
00:23:29.973 SGL Metadata Pointer: Not Supported
00:23:29.973 Oversized SGL: Not Supported
00:23:29.973 SGL Metadata Address: Not Supported
00:23:29.973 SGL Offset: Supported
00:23:29.973 Transport SGL Data Block: Not Supported
00:23:29.973 Replay Protected Memory Block: Not Supported
00:23:29.973 
00:23:29.973 Firmware Slot Information
00:23:29.973 =========================
00:23:29.973 Active slot: 1
00:23:29.973 Slot 1 Firmware Revision: 24.05
00:23:29.973 
00:23:29.973 
00:23:29.973 Commands Supported and Effects
00:23:29.973 ==============================
00:23:29.973 Admin Commands
00:23:29.973 --------------
00:23:29.973 Get Log Page (02h): Supported
00:23:29.973 Identify (06h): Supported
00:23:29.973 Abort (08h): Supported
00:23:29.973 Set Features (09h): Supported
00:23:29.973 Get Features (0Ah): Supported
00:23:29.973 Asynchronous Event Request (0Ch): Supported
00:23:29.973 Keep Alive (18h): Supported
00:23:29.973 I/O Commands
00:23:29.973 ------------
00:23:29.973 Flush (00h): Supported LBA-Change
00:23:29.973 Write (01h): Supported LBA-Change
00:23:29.973 Read (02h): Supported
00:23:29.973 Compare (05h): Supported
00:23:29.973 Write Zeroes (08h): Supported LBA-Change
00:23:29.973 Dataset Management (09h): Supported LBA-Change
00:23:29.973 Copy (19h): Supported LBA-Change
00:23:29.973 Unknown (79h): Supported LBA-Change
00:23:29.973 Unknown (7Ah): Supported
00:23:29.973 
00:23:29.973 Error Log
00:23:29.973 =========
00:23:29.973 
00:23:29.973 Arbitration
00:23:29.973 ===========
00:23:29.973 Arbitration Burst: 1
00:23:29.973 
00:23:29.973 Power Management
00:23:29.973 ================
00:23:29.973 Number of Power States: 1
00:23:29.973 Current Power State: Power State #0
00:23:29.973 Power State #0:
00:23:29.973 Max Power: 0.00 W
00:23:29.973 Non-Operational State: Operational
00:23:29.973 Entry Latency: Not Reported
00:23:29.973 Exit Latency: Not Reported
00:23:29.973 Relative Read Throughput: 0
00:23:29.973 Relative Read Latency: 0
00:23:29.973 Relative Write Throughput: 0
00:23:29.973 Relative Write Latency: 0
00:23:29.973 Idle Power: Not Reported
00:23:29.973 Active Power: Not Reported
00:23:29.973 Non-Operational
Permissive Mode: Not Supported 00:23:29.973 00:23:29.973 Health Information 00:23:29.973 ================== 00:23:29.973 Critical Warnings: 00:23:29.973 Available Spare Space: OK 00:23:29.973 Temperature: OK 00:23:29.973 Device Reliability: OK 00:23:29.973 Read Only: No 00:23:29.973 Volatile Memory Backup: OK 00:23:29.973 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:29.973 Temperature Threshold: [2024-05-15 01:26:05.539266] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.973 [2024-05-15 01:26:05.539272] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x12b2ca0) 00:23:29.973 [2024-05-15 01:26:05.539279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.973 [2024-05-15 01:26:05.539293] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131d320, cid 7, qid 0 00:23:29.973 [2024-05-15 01:26:05.539566] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.973 [2024-05-15 01:26:05.539572] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.973 [2024-05-15 01:26:05.539577] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.973 [2024-05-15 01:26:05.539583] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131d320) on tqpair=0x12b2ca0 00:23:29.973 [2024-05-15 01:26:05.539613] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:29.973 [2024-05-15 01:26:05.539626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.973 [2024-05-15 01:26:05.539634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.973 [2024-05-15 01:26:05.539641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.973 [2024-05-15 01:26:05.539649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:29.973 [2024-05-15 01:26:05.539657] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.973 [2024-05-15 01:26:05.539662] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.973 [2024-05-15 01:26:05.539667] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12b2ca0) 00:23:29.973 [2024-05-15 01:26:05.539674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.973 [2024-05-15 01:26:05.539688] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131cda0, cid 3, qid 0 00:23:29.973 [2024-05-15 01:26:05.539834] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.973 [2024-05-15 01:26:05.539842] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.973 [2024-05-15 01:26:05.539846] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.973 [2024-05-15 01:26:05.539851] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131cda0) on tqpair=0x12b2ca0 00:23:29.973 [2024-05-15 01:26:05.539860] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.973 [2024-05-15 01:26:05.539865] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.973 [2024-05-15 01:26:05.539870] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12b2ca0) 00:23:29.973 [2024-05-15 01:26:05.539877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.973 [2024-05-15 01:26:05.539893] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131cda0, cid 3, qid 0 00:23:29.973 [2024-05-15 01:26:05.540030] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.973 [2024-05-15 01:26:05.540037] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.973 [2024-05-15 01:26:05.540042] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.973 [2024-05-15 01:26:05.540047] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131cda0) on tqpair=0x12b2ca0 00:23:29.973 [2024-05-15 01:26:05.540054] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:29.973 [2024-05-15 01:26:05.540059] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:29.973 [2024-05-15 01:26:05.540070] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.973 [2024-05-15 01:26:05.540075] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.973 [2024-05-15 01:26:05.540080] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12b2ca0) 00:23:29.973 [2024-05-15 01:26:05.540087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.973 [2024-05-15 01:26:05.540099] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131cda0, cid 3, qid 0 00:23:29.973 [2024-05-15 01:26:05.540250] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.973 [2024-05-15 01:26:05.540258] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.973 [2024-05-15 01:26:05.540263] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.973 [2024-05-15 01:26:05.540270] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131cda0) on tqpair=0x12b2ca0 00:23:29.973 [2024-05-15 01:26:05.540283] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.973 [2024-05-15 01:26:05.540288] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.973 [2024-05-15 01:26:05.540293] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12b2ca0) 00:23:29.973 [2024-05-15 01:26:05.540301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.973 [2024-05-15 01:26:05.540313] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131cda0, cid 3, qid 0 00:23:29.973 [2024-05-15 01:26:05.540430] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.973 [2024-05-15 01:26:05.540437] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.974 [2024-05-15 01:26:05.540442] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.974 [2024-05-15 01:26:05.540447] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131cda0) on tqpair=0x12b2ca0 00:23:29.974 [2024-05-15 01:26:05.540457] 
nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.974 [2024-05-15 01:26:05.540462] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.974 [2024-05-15 01:26:05.540467] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12b2ca0) 00:23:29.974 [2024-05-15 01:26:05.540474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.974 [2024-05-15 01:26:05.540486] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131cda0, cid 3, qid 0 00:23:29.974 [2024-05-15 01:26:05.540614] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.974 [2024-05-15 01:26:05.540621] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.974 [2024-05-15 01:26:05.540626] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.974 [2024-05-15 01:26:05.540631] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131cda0) on tqpair=0x12b2ca0 00:23:29.974 [2024-05-15 01:26:05.540642] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.974 [2024-05-15 01:26:05.540647] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.974 [2024-05-15 01:26:05.540652] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12b2ca0) 00:23:29.974 [2024-05-15 01:26:05.540659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.974 [2024-05-15 01:26:05.540670] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131cda0, cid 3, qid 0 00:23:29.974 [2024-05-15 01:26:05.540826] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.974 [2024-05-15 01:26:05.540833] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.974 [2024-05-15 01:26:05.540838] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.974 [2024-05-15 01:26:05.540843] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131cda0) on tqpair=0x12b2ca0 00:23:29.974 [2024-05-15 01:26:05.540854] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.974 [2024-05-15 01:26:05.540859] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.974 [2024-05-15 01:26:05.540864] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12b2ca0) 00:23:29.974 [2024-05-15 01:26:05.540871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.974 [2024-05-15 01:26:05.540883] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131cda0, cid 3, qid 0 00:23:29.974 [2024-05-15 01:26:05.541038] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.974 [2024-05-15 01:26:05.541045] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.974 [2024-05-15 01:26:05.541050] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.974 [2024-05-15 01:26:05.541055] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131cda0) on tqpair=0x12b2ca0 00:23:29.974 [2024-05-15 01:26:05.541069] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.974 [2024-05-15 01:26:05.541074] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.974 [2024-05-15 
01:26:05.541079] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12b2ca0) 00:23:29.974 [2024-05-15 01:26:05.541086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.974 [2024-05-15 01:26:05.541098] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131cda0, cid 3, qid 0 00:23:29.974 [2024-05-15 01:26:05.545198] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.974 [2024-05-15 01:26:05.545209] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.974 [2024-05-15 01:26:05.545213] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.974 [2024-05-15 01:26:05.545218] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131cda0) on tqpair=0x12b2ca0 00:23:29.974 [2024-05-15 01:26:05.545232] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:29.974 [2024-05-15 01:26:05.545237] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:29.974 [2024-05-15 01:26:05.545242] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12b2ca0) 00:23:29.974 [2024-05-15 01:26:05.545249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:29.974 [2024-05-15 01:26:05.545263] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x131cda0, cid 3, qid 0 00:23:29.974 [2024-05-15 01:26:05.545496] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:29.974 [2024-05-15 01:26:05.545503] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:29.974 [2024-05-15 01:26:05.545507] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:29.974 [2024-05-15 01:26:05.545512] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x131cda0) on tqpair=0x12b2ca0 00:23:29.974 [2024-05-15 01:26:05.545522] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:23:29.974 0 Kelvin (-273 Celsius) 00:23:29.974 Available Spare: 0% 00:23:29.974 Available Spare Threshold: 0% 00:23:29.974 Life Percentage Used: 0% 00:23:29.974 Data Units Read: 0 00:23:29.974 Data Units Written: 0 00:23:29.974 Host Read Commands: 0 00:23:29.974 Host Write Commands: 0 00:23:29.974 Controller Busy Time: 0 minutes 00:23:29.974 Power Cycles: 0 00:23:29.974 Power On Hours: 0 hours 00:23:29.974 Unsafe Shutdowns: 0 00:23:29.974 Unrecoverable Media Errors: 0 00:23:29.974 Lifetime Error Log Entries: 0 00:23:29.974 Warning Temperature Time: 0 minutes 00:23:29.974 Critical Temperature Time: 0 minutes 00:23:29.974 00:23:29.974 Number of Queues 00:23:29.974 ================ 00:23:29.974 Number of I/O Submission Queues: 127 00:23:29.974 Number of I/O Completion Queues: 127 00:23:29.974 00:23:29.974 Active Namespaces 00:23:29.974 ================= 00:23:29.974 Namespace ID:1 00:23:29.974 Error Recovery Timeout: Unlimited 00:23:29.974 Command Set Identifier: NVM (00h) 00:23:29.974 Deallocate: Supported 00:23:29.974 Deallocated/Unwritten Error: Not Supported 00:23:29.974 Deallocated Read Value: Unknown 00:23:29.974 Deallocate in Write Zeroes: Not Supported 00:23:29.974 Deallocated Guard Field: 0xFFFF 00:23:29.974 Flush: Supported 00:23:29.974 Reservation: Supported 00:23:29.974 Namespace Sharing Capabilities: Multiple Controllers 00:23:29.974 Size (in 
LBAs): 131072 (0GiB) 00:23:29.974 Capacity (in LBAs): 131072 (0GiB) 00:23:29.974 Utilization (in LBAs): 131072 (0GiB) 00:23:29.974 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:29.974 EUI64: ABCDEF0123456789 00:23:29.974 UUID: d5b30bdb-0c56-4423-94a2-376c21015c80 00:23:29.974 Thin Provisioning: Not Supported 00:23:29.974 Per-NS Atomic Units: Yes 00:23:29.974 Atomic Boundary Size (Normal): 0 00:23:29.974 Atomic Boundary Size (PFail): 0 00:23:29.974 Atomic Boundary Offset: 0 00:23:29.974 Maximum Single Source Range Length: 65535 00:23:29.974 Maximum Copy Length: 65535 00:23:29.974 Maximum Source Range Count: 1 00:23:29.974 NGUID/EUI64 Never Reused: No 00:23:29.974 Namespace Write Protected: No 00:23:29.974 Number of LBA Formats: 1 00:23:29.974 Current LBA Format: LBA Format #00 00:23:29.974 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:29.974 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:29.974 rmmod nvme_tcp 00:23:29.974 rmmod nvme_fabrics 00:23:29.974 rmmod nvme_keyring 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 4190349 ']' 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 4190349 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 4190349 ']' 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 4190349 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:29.974 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 4190349 00:23:30.234 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:30.234 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:30.234 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 4190349' 00:23:30.234 killing process with pid 4190349 00:23:30.234 01:26:05 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@965 -- # kill 4190349 00:23:30.234 [2024-05-15 01:26:05.688770] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:30.234 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 4190349 00:23:30.234 01:26:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:30.234 01:26:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:30.234 01:26:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:30.234 01:26:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:30.234 01:26:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:30.234 01:26:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.234 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:30.234 01:26:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.770 01:26:07 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:32.770 00:23:32.770 real 0m10.491s 00:23:32.770 user 0m7.775s 00:23:32.770 sys 0m5.396s 00:23:32.770 01:26:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:32.770 01:26:07 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:32.770 ************************************ 00:23:32.770 END TEST nvmf_identify 00:23:32.770 ************************************ 00:23:32.770 01:26:08 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:32.770 01:26:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:32.770 01:26:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:32.770 01:26:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:32.770 ************************************ 00:23:32.770 START TEST nvmf_perf 00:23:32.771 ************************************ 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:32.771 * Looking for test storage... 
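For reference, the nvmf_identify teardown traced above reduces to roughly the following sequence (a sketch reconstructed from the xtrace lines; rpc_cmd is the harness wrapper around scripts/rpc.py, and any cleanup the helpers nvmftestfini, killprocess and _remove_spdk_ns perform beyond what is logged here is an assumption):

    # tear down the target-side subsystem over RPC
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    # unload the kernel initiator stack (nvme_fabrics and nvme_keyring come out with it, per the rmmod lines above)
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # stop the nvmf_tgt app that served the test (pid 4190349 in this run)
    kill 4190349
    # drop the initiator-side address used by the namespace-based test topology
    ip -4 addr flush cvl_0_1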
00:23:32.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.771 01:26:08 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:32.771 01:26:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:23:39.368 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.368 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:39.368 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:39.368 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:39.368 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:39.368 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:39.368 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:39.368 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:23:39.368 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:39.368 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:23:39.368 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:23:39.368 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:23:39.368 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:23:39.368 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:23:39.368 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:39.369 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:39.369 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:39.369 Found net devices under 0000:af:00.0: cvl_0_0 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:39.369 Found net devices under 0000:af:00.1: cvl_0_1 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:39.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:23:39.369 00:23:39.369 --- 10.0.0.2 ping statistics --- 00:23:39.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.369 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:23:39.369 01:26:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:39.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:39.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:23:39.369 00:23:39.369 --- 10.0.0.1 ping statistics --- 00:23:39.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.369 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=404 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 404 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 404 ']' 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:39.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:39.369 01:26:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:39.628 [2024-05-15 01:26:15.076457] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:23:39.628 [2024-05-15 01:26:15.076502] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.628 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.628 [2024-05-15 01:26:15.148097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.628 [2024-05-15 01:26:15.223800] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.628 [2024-05-15 01:26:15.223833] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
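The nvmf_tcp_init sequence traced above builds the usual two-port test topology: the first e810 port (cvl_0_0) is moved into a private network namespace and becomes the target NIC at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator NIC at 10.0.0.1, both directions are verified with a single ping, and nvmf_tgt is then started inside the namespace. A minimal sketch of the same setup, using the interface and namespace names from this run (paths shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
    ping -c 1 10.0.0.2                                   # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF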
00:23:39.628 [2024-05-15 01:26:15.223843] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.628 [2024-05-15 01:26:15.223852] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.628 [2024-05-15 01:26:15.223859] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.628 [2024-05-15 01:26:15.223903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.628 [2024-05-15 01:26:15.223997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.628 [2024-05-15 01:26:15.224080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.628 [2024-05-15 01:26:15.224082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.565 01:26:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:40.565 01:26:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:23:40.565 01:26:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:40.565 01:26:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.565 01:26:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:40.565 01:26:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.565 01:26:15 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:40.565 01:26:15 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:43.854 01:26:18 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:43.854 01:26:18 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:43.854 01:26:19 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:23:43.854 01:26:19 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:43.854 01:26:19 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:43.854 01:26:19 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:23:43.854 01:26:19 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:43.854 01:26:19 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:43.854 01:26:19 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:43.854 [2024-05-15 01:26:19.515962] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.113 01:26:19 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:44.113 01:26:19 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:44.113 01:26:19 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:44.373 01:26:19 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:44.373 01:26:19 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:44.632 01:26:20 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:44.632 [2024-05-15 01:26:20.250465] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:44.632 [2024-05-15 01:26:20.250742] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.632 01:26:20 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:44.891 01:26:20 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:23:44.891 01:26:20 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:23:44.891 01:26:20 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:44.891 01:26:20 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:23:46.268 Initializing NVMe Controllers 00:23:46.268 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:23:46.268 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:23:46.268 Initialization complete. Launching workers. 00:23:46.268 ======================================================== 00:23:46.268 Latency(us) 00:23:46.268 Device Information : IOPS MiB/s Average min max 00:23:46.268 PCIE (0000:d8:00.0) NSID 1 from core 0: 102067.90 398.70 312.99 39.70 4287.79 00:23:46.269 ======================================================== 00:23:46.269 Total : 102067.90 398.70 312.99 39.70 4287.79 00:23:46.269 00:23:46.269 01:26:21 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:46.269 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.646 Initializing NVMe Controllers 00:23:47.646 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:47.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:47.646 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:47.646 Initialization complete. Launching workers. 
00:23:47.646 ======================================================== 00:23:47.646 Latency(us) 00:23:47.646 Device Information : IOPS MiB/s Average min max 00:23:47.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 65.77 0.26 15347.39 447.56 45077.14 00:23:47.646 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 67.76 0.26 14992.34 7957.62 47899.63 00:23:47.646 ======================================================== 00:23:47.646 Total : 133.53 0.52 15167.21 447.56 47899.63 00:23:47.646 00:23:47.646 01:26:23 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:47.646 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.023 Initializing NVMe Controllers 00:23:49.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:49.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:49.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:49.023 Initialization complete. Launching workers. 00:23:49.023 ======================================================== 00:23:49.023 Latency(us) 00:23:49.023 Device Information : IOPS MiB/s Average min max 00:23:49.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8351.99 32.62 3841.59 759.93 8756.05 00:23:49.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3856.00 15.06 8337.06 5555.29 15976.70 00:23:49.023 ======================================================== 00:23:49.023 Total : 12207.99 47.69 5261.52 759.93 15976.70 00:23:49.023 00:23:49.023 01:26:24 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:49.023 01:26:24 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:49.023 01:26:24 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:49.023 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.561 Initializing NVMe Controllers 00:23:51.561 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:51.561 Controller IO queue size 128, less than required. 00:23:51.561 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:51.561 Controller IO queue size 128, less than required. 00:23:51.561 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:51.561 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:51.561 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:51.561 Initialization complete. Launching workers. 
00:23:51.561 ======================================================== 00:23:51.561 Latency(us) 00:23:51.561 Device Information : IOPS MiB/s Average min max 00:23:51.561 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 935.41 233.85 143817.34 80858.59 208209.28 00:23:51.561 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 598.13 149.53 224299.14 103720.62 350692.78 00:23:51.561 ======================================================== 00:23:51.561 Total : 1533.54 383.39 175207.60 80858.59 350692.78 00:23:51.561 00:23:51.561 01:26:26 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:51.561 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.561 No valid NVMe controllers or AIO or URING devices found 00:23:51.561 Initializing NVMe Controllers 00:23:51.561 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:51.561 Controller IO queue size 128, less than required. 00:23:51.561 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:51.561 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:51.561 Controller IO queue size 128, less than required. 00:23:51.561 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:51.561 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:23:51.561 WARNING: Some requested NVMe devices were skipped 00:23:51.561 01:26:27 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:51.561 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.099 Initializing NVMe Controllers 00:23:54.099 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:54.099 Controller IO queue size 128, less than required. 00:23:54.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:54.099 Controller IO queue size 128, less than required. 00:23:54.099 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:54.100 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:54.100 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:54.100 Initialization complete. Launching workers. 
00:23:54.100 00:23:54.100 ==================== 00:23:54.100 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:54.100 TCP transport: 00:23:54.100 polls: 48655 00:23:54.100 idle_polls: 16540 00:23:54.100 sock_completions: 32115 00:23:54.100 nvme_completions: 3735 00:23:54.100 submitted_requests: 5590 00:23:54.100 queued_requests: 1 00:23:54.100 00:23:54.100 ==================== 00:23:54.100 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:54.100 TCP transport: 00:23:54.100 polls: 47310 00:23:54.100 idle_polls: 14897 00:23:54.100 sock_completions: 32413 00:23:54.100 nvme_completions: 3717 00:23:54.100 submitted_requests: 5516 00:23:54.100 queued_requests: 1 00:23:54.100 ======================================================== 00:23:54.100 Latency(us) 00:23:54.100 Device Information : IOPS MiB/s Average min max 00:23:54.100 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 932.37 233.09 141226.47 69584.52 245364.06 00:23:54.100 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 927.87 231.97 142006.30 59653.81 203329.47 00:23:54.100 ======================================================== 00:23:54.100 Total : 1860.24 465.06 141615.45 59653.81 245364.06 00:23:54.100 00:23:54.100 01:26:29 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:54.100 01:26:29 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:54.359 01:26:29 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:54.359 01:26:29 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:54.359 01:26:29 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:54.359 01:26:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:54.359 01:26:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:23:54.359 01:26:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:54.359 01:26:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:23:54.359 01:26:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:54.359 01:26:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:54.359 rmmod nvme_tcp 00:23:54.359 rmmod nvme_fabrics 00:23:54.359 rmmod nvme_keyring 00:23:54.359 01:26:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:54.359 01:26:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:23:54.359 01:26:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:23:54.359 01:26:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 404 ']' 00:23:54.359 01:26:29 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 404 00:23:54.359 01:26:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 404 ']' 00:23:54.360 01:26:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 404 00:23:54.360 01:26:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:23:54.360 01:26:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:54.360 01:26:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 404 00:23:54.360 01:26:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:54.360 01:26:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:54.360 01:26:29 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 404' 00:23:54.360 killing process with pid 404 00:23:54.360 01:26:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 404 00:23:54.360 [2024-05-15 01:26:29.936548] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:54.360 01:26:29 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 404 00:23:56.960 01:26:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:56.960 01:26:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:56.960 01:26:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:56.960 01:26:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:56.960 01:26:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:56.960 01:26:32 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.960 01:26:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.960 01:26:32 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.869 01:26:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:58.869 00:23:58.869 real 0m26.030s 00:23:58.869 user 1m7.373s 00:23:58.869 sys 0m8.567s 00:23:58.869 01:26:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:58.869 01:26:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:58.869 ************************************ 00:23:58.869 END TEST nvmf_perf 00:23:58.869 ************************************ 00:23:58.869 01:26:34 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:58.869 01:26:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:58.869 01:26:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:58.869 01:26:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:58.869 ************************************ 00:23:58.869 START TEST nvmf_fio_host 00:23:58.869 ************************************ 00:23:58.869 01:26:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:58.869 * Looking for test storage... 
00:23:58.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:58.869 01:26:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.869 01:26:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.869 01:26:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.869 01:26:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.869 01:26:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.869 01:26:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.869 01:26:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.869 01:26:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:58.869 01:26:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.869 01:26:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.869 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:58.870 01:26:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
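[Note] At this point nvmf/common.sh has exported the initiator-side defaults used for the rest of this job: TCP ports 4420-4422, a host NQN generated with 'nvme gen-hostnqn', and NVME_CONNECT='nvme connect'. The fio host test below drives I/O through the SPDK fio plugin rather than the kernel initiator, but as a hedged illustration (not part of the recorded test flow) those variables map onto a plain nvme-cli session roughly as follows; the subsystem NQN and target address are taken from later in this same run:
  # Illustrative only: kernel NVMe/TCP initiator using the values exported above.
  NVMF_PORT=4420
  NVME_HOSTNQN=$(nvme gen-hostnqn)          # per-run host NQN, as in the trace
  NVMF_FIRST_TARGET_IP=10.0.0.2             # target address configured later in this run
  NVME_SUBNQN=nqn.2016-06.io.spdk:cnode1    # subsystem created later in this run

  # Discover the target's subsystems, connect, then tear the session down.
  nvme discover -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" --hostnqn="$NVME_HOSTNQN"
  nvme connect  -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" -n "$NVME_SUBNQN" --hostnqn="$NVME_HOSTNQN"
  nvme disconnect -n "$NVME_SUBNQN"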
00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:05.443 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:05.443 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:05.443 Found net devices under 0000:af:00.0: cvl_0_0 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:05.443 Found net devices under 0000:af:00.1: cvl_0_1 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:05.443 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:05.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:24:05.444 00:24:05.444 --- 10.0.0.2 ping statistics --- 00:24:05.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.444 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:05.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:05.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:24:05.444 00:24:05.444 --- 10.0.0.1 ping statistics --- 00:24:05.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.444 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:05.444 01:26:40 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:05.444 01:26:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:24:05.444 01:26:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:24:05.444 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:05.444 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.444 01:26:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=7331 00:24:05.444 01:26:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:05.444 01:26:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:05.444 01:26:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 7331 00:24:05.444 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 7331 ']' 00:24:05.444 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.444 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:05.444 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.444 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:05.444 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.444 [2024-05-15 01:26:41.058248] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:24:05.444 [2024-05-15 01:26:41.058305] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.444 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.444 [2024-05-15 01:26:41.131922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:05.703 [2024-05-15 01:26:41.201983] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
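[Note] Condensed for readability, the nvmf_tcp_init sequence recorded above builds a two-namespace topology: the first ice port (cvl_0_0) is moved into a target namespace at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, and connectivity is verified in both directions with ping. A sketch using the interface and namespace names from this run:
  # Target-side namespace; cvl_0_0/cvl_0_1 are the e810 ports found above.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Initiator keeps cvl_0_1 in the root namespace.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # Bring both ends (and loopback in the namespace) up.
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Allow NVMe/TCP traffic to the default port and verify reachability.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
The startup notices that follow correspond to nvmf_tgt being launched inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why the RPC socket and listener live behind cvl_0_0_ns_spdk.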
00:24:05.703 [2024-05-15 01:26:41.202024] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.703 [2024-05-15 01:26:41.202033] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.703 [2024-05-15 01:26:41.202041] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.703 [2024-05-15 01:26:41.202064] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:05.703 [2024-05-15 01:26:41.202122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.703 [2024-05-15 01:26:41.202220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.703 [2024-05-15 01:26:41.202264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:05.703 [2024-05-15 01:26:41.202266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.271 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:06.271 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:24:06.271 01:26:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:06.271 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.271 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.271 [2024-05-15 01:26:41.879923] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.271 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.271 01:26:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:24:06.271 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:06.271 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.271 01:26:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:06.271 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.271 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.271 Malloc1 00:24:06.271 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.271 01:26:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:06.271 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.271 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:24:06.531 [2024-05-15 01:26:41.978728] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:06.531 [2024-05-15 01:26:41.979001] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:24:06.531 01:26:41 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:06.531 01:26:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:06.531 01:26:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:06.531 01:26:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:06.531 01:26:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:06.531 01:26:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:06.531 01:26:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:06.531 01:26:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:06.531 
01:26:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:06.531 01:26:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:06.531 01:26:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:06.791 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:06.791 fio-3.35 00:24:06.791 Starting 1 thread 00:24:06.791 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.330 00:24:09.330 test: (groupid=0, jobs=1): err= 0: pid=7753: Wed May 15 01:26:44 2024 00:24:09.330 read: IOPS=11.9k, BW=46.5MiB/s (48.7MB/s)(93.2MiB/2006msec) 00:24:09.330 slat (nsec): min=1547, max=274304, avg=1683.35, stdev=2362.45 00:24:09.330 clat (usec): min=3018, max=14641, avg=6218.91, stdev=1418.10 00:24:09.330 lat (usec): min=3020, max=14643, avg=6220.59, stdev=1418.30 00:24:09.330 clat percentiles (usec): 00:24:09.330 | 1.00th=[ 4080], 5.00th=[ 4686], 10.00th=[ 5014], 20.00th=[ 5342], 00:24:09.330 | 30.00th=[ 5538], 40.00th=[ 5735], 50.00th=[ 5866], 60.00th=[ 6063], 00:24:09.330 | 70.00th=[ 6259], 80.00th=[ 6718], 90.00th=[ 8029], 95.00th=[ 9372], 00:24:09.330 | 99.00th=[11863], 99.50th=[12518], 99.90th=[13566], 99.95th=[13698], 00:24:09.330 | 99.99th=[14353] 00:24:09.330 bw ( KiB/s): min=46408, max=48832, per=100.00%, avg=47598.00, stdev=1006.67, samples=4 00:24:09.330 iops : min=11602, max=12208, avg=11899.50, stdev=251.67, samples=4 00:24:09.330 write: IOPS=11.8k, BW=46.3MiB/s (48.5MB/s)(92.8MiB/2006msec); 0 zone resets 00:24:09.330 slat (nsec): min=1595, max=223257, avg=1751.87, stdev=1602.42 00:24:09.330 clat (usec): min=1917, max=9889, avg=4527.05, stdev=774.64 00:24:09.330 lat (usec): min=1919, max=9891, avg=4528.81, stdev=774.80 00:24:09.330 clat percentiles (usec): 00:24:09.330 | 1.00th=[ 2802], 5.00th=[ 3261], 10.00th=[ 3589], 20.00th=[ 3982], 00:24:09.330 | 30.00th=[ 4228], 40.00th=[ 4424], 50.00th=[ 4555], 60.00th=[ 4686], 00:24:09.330 | 70.00th=[ 4817], 80.00th=[ 4948], 90.00th=[ 5276], 95.00th=[ 5800], 00:24:09.330 | 99.00th=[ 6980], 99.50th=[ 7504], 99.90th=[ 8848], 99.95th=[ 9110], 00:24:09.330 | 99.99th=[ 9896] 00:24:09.330 bw ( KiB/s): min=46776, max=48152, per=100.00%, avg=47406.00, stdev=651.61, samples=4 00:24:09.330 iops : min=11694, max=12038, avg=11851.50, stdev=162.90, samples=4 00:24:09.330 lat (msec) : 2=0.01%, 4=10.49%, 10=87.86%, 20=1.64% 00:24:09.330 cpu : usr=63.94%, sys=30.22%, ctx=43, majf=0, minf=4 00:24:09.330 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:09.330 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:09.330 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:09.330 issued rwts: total=23869,23760,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:09.330 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:09.330 00:24:09.330 Run status group 0 (all jobs): 00:24:09.330 READ: bw=46.5MiB/s (48.7MB/s), 46.5MiB/s-46.5MiB/s (48.7MB/s-48.7MB/s), io=93.2MiB (97.8MB), run=2006-2006msec 00:24:09.330 WRITE: bw=46.3MiB/s (48.5MB/s), 46.3MiB/s-46.3MiB/s (48.5MB/s-48.5MB/s), io=92.8MiB (97.3MB), run=2006-2006msec 00:24:09.330 01:26:44 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:09.330 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:09.330 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:09.330 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:09.330 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:09.330 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:09.330 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:24:09.330 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:09.330 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:09.330 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:24:09.330 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:09.330 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:09.330 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:09.331 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:09.331 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:09.331 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:09.331 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:09.331 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:09.331 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:09.331 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:09.331 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:09.331 01:26:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:09.589 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:09.589 fio-3.35 00:24:09.589 Starting 1 thread 00:24:09.589 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.123 00:24:12.123 test: (groupid=0, jobs=1): err= 0: pid=8384: Wed May 15 01:26:47 2024 00:24:12.123 read: IOPS=9901, BW=155MiB/s (162MB/s)(310MiB/2006msec) 00:24:12.123 slat (nsec): min=2442, max=79724, avg=2798.84, stdev=1297.90 00:24:12.123 clat (usec): min=2331, max=49617, avg=8075.45, stdev=4191.19 00:24:12.123 lat (usec): min=2334, max=49620, avg=8078.25, 
stdev=4191.44 00:24:12.123 clat percentiles (usec): 00:24:12.123 | 1.00th=[ 3785], 5.00th=[ 4555], 10.00th=[ 5145], 20.00th=[ 5932], 00:24:12.123 | 30.00th=[ 6456], 40.00th=[ 6980], 50.00th=[ 7570], 60.00th=[ 8029], 00:24:12.123 | 70.00th=[ 8455], 80.00th=[ 9110], 90.00th=[10159], 95.00th=[11994], 00:24:12.123 | 99.00th=[24773], 99.50th=[44827], 99.90th=[48497], 99.95th=[49021], 00:24:12.123 | 99.99th=[49546] 00:24:12.123 bw ( KiB/s): min=68256, max=90496, per=49.74%, avg=78800.00, stdev=9489.52, samples=4 00:24:12.123 iops : min= 4266, max= 5656, avg=4925.00, stdev=593.09, samples=4 00:24:12.123 write: IOPS=6018, BW=94.0MiB/s (98.6MB/s)(160MiB/1706msec); 0 zone resets 00:24:12.123 slat (usec): min=28, max=396, avg=30.49, stdev= 8.02 00:24:12.123 clat (usec): min=2529, max=25848, avg=8702.06, stdev=2627.18 00:24:12.123 lat (usec): min=2559, max=25897, avg=8732.55, stdev=2630.94 00:24:12.123 clat percentiles (usec): 00:24:12.123 | 1.00th=[ 5800], 5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 7242], 00:24:12.123 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8291], 60.00th=[ 8586], 00:24:12.123 | 70.00th=[ 8979], 80.00th=[ 9503], 90.00th=[10290], 95.00th=[11207], 00:24:12.123 | 99.00th=[24511], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 00:24:12.123 | 99.99th=[25822] 00:24:12.123 bw ( KiB/s): min=71776, max=93408, per=85.12%, avg=81960.00, stdev=8983.45, samples=4 00:24:12.123 iops : min= 4486, max= 5838, avg=5122.50, stdev=561.47, samples=4 00:24:12.123 lat (msec) : 4=1.21%, 10=87.20%, 20=9.78%, 50=1.82% 00:24:12.123 cpu : usr=83.05%, sys=14.26%, ctx=19, majf=0, minf=1 00:24:12.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:24:12.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:12.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:12.123 issued rwts: total=19863,10267,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:12.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:12.123 00:24:12.123 Run status group 0 (all jobs): 00:24:12.123 READ: bw=155MiB/s (162MB/s), 155MiB/s-155MiB/s (162MB/s-162MB/s), io=310MiB (325MB), run=2006-2006msec 00:24:12.123 WRITE: bw=94.0MiB/s (98.6MB/s), 94.0MiB/s-94.0MiB/s (98.6MB/s-98.6MB/s), io=160MiB (168MB), run=1706-1706msec 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:12.123 rmmod nvme_tcp 00:24:12.123 rmmod nvme_fabrics 00:24:12.123 rmmod nvme_keyring 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 7331 ']' 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 7331 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 7331 ']' 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 7331 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 7331 00:24:12.123 01:26:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:12.124 01:26:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:12.124 01:26:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 7331' 00:24:12.124 killing process with pid 7331 00:24:12.124 01:26:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 7331 00:24:12.124 [2024-05-15 01:26:47.496477] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:12.124 01:26:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 7331 00:24:12.124 01:26:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:12.124 01:26:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:12.124 01:26:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:12.124 01:26:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:12.124 01:26:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:12.124 01:26:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.124 01:26:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.124 01:26:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.661 01:26:49 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:14.661 00:24:14.661 real 0m15.588s 00:24:14.661 user 0m46.506s 00:24:14.661 sys 0m7.220s 00:24:14.661 01:26:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:14.661 01:26:49 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.661 ************************************ 00:24:14.661 END TEST nvmf_fio_host 00:24:14.661 ************************************ 00:24:14.661 01:26:49 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:14.661 01:26:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:14.661 01:26:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:14.661 
01:26:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:14.661 ************************************ 00:24:14.661 START TEST nvmf_failover 00:24:14.661 ************************************ 00:24:14.661 01:26:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:14.661 * Looking for test storage... 00:24:14.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:14.661 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:14.662 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:14.662 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.662 01:26:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.662 01:26:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.662 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:14.662 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:14.662 01:26:50 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:14.662 01:26:50 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:21.226 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:21.226 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:21.226 Found net devices under 0000:af:00.0: cvl_0_0 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:21.226 Found net devices under 0000:af:00.1: cvl_0_1 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:21.226 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:21.227 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.227 01:26:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:21.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:21.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:24:21.227 00:24:21.227 --- 10.0.0.2 ping statistics --- 00:24:21.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.227 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:21.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:24:21.227 00:24:21.227 --- 10.0.0.1 ping statistics --- 00:24:21.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.227 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=12352 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 12352 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 12352 ']' 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:21.227 01:26:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:21.227 [2024-05-15 01:26:56.270422] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:24:21.227 [2024-05-15 01:26:56.270469] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.227 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.227 [2024-05-15 01:26:56.344795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:21.227 [2024-05-15 01:26:56.420054] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:21.227 [2024-05-15 01:26:56.420090] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.227 [2024-05-15 01:26:56.420099] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.227 [2024-05-15 01:26:56.420108] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.227 [2024-05-15 01:26:56.420132] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:21.227 [2024-05-15 01:26:56.420184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.227 [2024-05-15 01:26:56.420266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:21.227 [2024-05-15 01:26:56.420268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.486 01:26:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:21.486 01:26:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:21.486 01:26:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:21.486 01:26:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:21.486 01:26:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:21.486 01:26:57 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.486 01:26:57 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:21.763 [2024-05-15 01:26:57.285088] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.763 01:26:57 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:22.036 Malloc0 00:24:22.036 01:26:57 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:22.036 01:26:57 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:22.296 01:26:57 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:22.555 [2024-05-15 01:26:58.053476] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:22.555 [2024-05-15 01:26:58.053708] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.555 01:26:58 nvmf_tcp.nvmf_failover 
-- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:22.555 [2024-05-15 01:26:58.230132] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:22.814 01:26:58 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:22.814 [2024-05-15 01:26:58.398687] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:22.814 01:26:58 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:22.814 01:26:58 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=12658 00:24:22.814 01:26:58 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:22.814 01:26:58 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 12658 /var/tmp/bdevperf.sock 00:24:22.814 01:26:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 12658 ']' 00:24:22.814 01:26:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:22.814 01:26:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:22.814 01:26:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:22.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
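Condensed, the target bring-up traced above reduces to the RPC sequence below. This is a sketch reconstructed from the commands visible in this log rather than a script of its own; the repository path, the cvl_0_0_ns_spdk namespace, the 10.0.0.2 listen address and the /var/tmp sockets are all specific to this run, and the autotest helpers additionally wait for each RPC socket to appear before issuing the next command, which the sketch omits:

spdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # checkout used by this job
rpc_py="$spdk_dir/scripts/rpc.py"                             # drives nvmf_tgt via the default /var/tmp/spdk.sock

# nvmf_tgt runs inside the namespace created by nvmf_tcp_init; cvl_0_0 carries 10.0.0.2 in there
ip netns exec cvl_0_0_ns_spdk "$spdk_dir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &

# TCP transport, a 64 MiB malloc bdev as the namespace, one subsystem, listeners on three ports
"$rpc_py" nvmf_create_transport -t tcp -o -u 8192
"$rpc_py" bdev_malloc_create 64 512 -b Malloc0
"$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done

# bdevperf is started idle (-z) on its own RPC socket; the test attaches controllers to it later
"$spdk_dir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &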
00:24:22.814 01:26:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:22.814 01:26:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:23.749 01:26:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:23.749 01:26:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:23.749 01:26:59 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:24.007 NVMe0n1 00:24:24.266 01:26:59 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:24.525 00:24:24.525 01:26:59 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:24.525 01:26:59 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=12929 00:24:24.525 01:26:59 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:25.459 01:27:00 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:25.717 [2024-05-15 01:27:01.155672] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fdf00 is same with the state(5) to be set 00:24:25.717 [2024-05-15 01:27:01.155722] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fdf00 is same with the state(5) to be set 00:24:25.717 [2024-05-15 01:27:01.155732] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fdf00 is same with the state(5) to be set 00:24:25.717 [2024-05-15 01:27:01.155742] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fdf00 is same with the state(5) to be set 00:24:25.717 [2024-05-15 01:27:01.155752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fdf00 is same with the state(5) to be set 00:24:25.717 [2024-05-15 01:27:01.155762] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fdf00 is same with the state(5) to be set 00:24:25.717 [2024-05-15 01:27:01.155770] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fdf00 is same with the state(5) to be set 00:24:25.717 [2024-05-15 01:27:01.155779] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fdf00 is same with the state(5) to be set 00:24:25.718 [2024-05-15 01:27:01.155788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fdf00 is same with the state(5) to be set 00:24:25.718 [2024-05-15 01:27:01.155797] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fdf00 is same with the state(5) to be set 00:24:25.718 [2024-05-15 01:27:01.155805] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fdf00 is same with the state(5) to be set 00:24:25.718 [2024-05-15 01:27:01.155814] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fdf00 is same with the state(5) to be set 00:24:25.718 [2024-05-15 01:27:01.155822] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fdf00 is same with the state(5) to be set 00:24:25.719 [2024-05-15 01:27:01.156780] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fdf00 is same with the state(5) to be set 00:24:25.719 [2024-05-15 01:27:01.156788] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fdf00 is same with the state(5) to be set 00:24:25.719 [2024-05-15 01:27:01.156798] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fdf00 is same with the state(5) to be set 00:24:25.719 [2024-05-15 01:27:01.156806] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fdf00 is same with the state(5) to be set 00:24:25.719 [2024-05-15 01:27:01.156815] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13fdf00 is same with the state(5) to be set 00:24:25.719 01:27:01 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:29.011 01:27:04 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:29.011 00:24:29.011 01:27:04 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:29.011 [2024-05-15 01:27:04.602622] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13feac0 is same with the state(5) to be set 00:24:29.011 [2024-05-15 01:27:04.602662] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13feac0 is same with the state(5) to be set 00:24:29.011 [2024-05-15 01:27:04.602673] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13feac0 is same with the state(5) to be set 00:24:29.011 [2024-05-15 01:27:04.602682] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13feac0 is same with the state(5) to be set 00:24:29.011 [2024-05-15 01:27:04.602691] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13feac0 is same with the state(5) to be set 00:24:29.011 [2024-05-15 01:27:04.602700] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13feac0 is same with the state(5) to be set 00:24:29.011 [2024-05-15 01:27:04.602709] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13feac0 is same with the state(5) to be set 00:24:29.011 [2024-05-15 01:27:04.602718] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13feac0 is same with the state(5) to be set 00:24:29.011 [2024-05-15 01:27:04.602727] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13feac0 is same with the state(5) to be set 00:24:29.011 [2024-05-15 01:27:04.602735] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13feac0 is same with the state(5) to be set 00:24:29.011 [2024-05-15 01:27:04.602744] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13feac0 is same with the state(5) to be set 00:24:29.011 [2024-05-15 01:27:04.602752] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13feac0 is same with the state(5) to be set 00:24:29.011 [2024-05-15 01:27:04.602761] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13feac0 is same with the state(5) to be set 00:24:29.011 01:27:04 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
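The failover exercise itself is the short add/remove dance traced at host/failover.sh@35 through @50 above: bdevperf is given two TCP paths to the same subsystem, the verify workload is started, and the listener currently in use is then removed so that the bdev_nvme layer has to fail over to the surviving path; the bursts of nvmf_tcp_qpair_set_recv_state messages appear to accompany the qpair teardown each removal triggers. A minimal sketch of that sequence, reusing the spdk_dir and rpc_py shorthands from the setup sketch above (brpc is likewise a local shorthand, not a helper from the test scripts):

brpc() { "$rpc_py" -s /var/tmp/bdevperf.sock "$@"; }    # RPCs aimed at bdevperf, not at nvmf_tgt

# two paths to cnode1 under the same bdev name, then kick off the verify workload
brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$spdk_dir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &
sleep 1

# drop the active listener, give the failover time to settle, then repeat with a fresh path
"$rpc_py" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 3
brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$rpc_py" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3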
00:24:32.301 01:27:07 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:32.301 [2024-05-15 01:27:07.797432] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.301 01:27:07 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:33.237 01:27:08 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:33.496 [2024-05-15 01:27:08.992432] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255740 is same with the state(5) to be set 00:24:33.496 [2024-05-15 01:27:08.992472] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255740 is same with the state(5) to be set 00:24:33.496 [2024-05-15 01:27:08.992483] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255740 is same with the state(5) to be set 00:24:33.496 [2024-05-15 01:27:08.992492] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255740 is same with the state(5) to be set 00:24:33.496 [2024-05-15 01:27:08.992501] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255740 is same with the state(5) to be set 00:24:33.496 [2024-05-15 01:27:08.992510] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255740 is same with the state(5) to be set 00:24:33.496 [2024-05-15 01:27:08.992518] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255740 is same with the state(5) to be set 00:24:33.496 [2024-05-15 01:27:08.992527] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1255740 is same with the state(5) to be set 00:24:33.496 01:27:09 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 12929 00:24:40.071 0 00:24:40.071 01:27:15 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 12658 00:24:40.071 01:27:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 12658 ']' 00:24:40.071 01:27:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 12658 00:24:40.071 01:27:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:24:40.071 01:27:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:40.071 01:27:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 12658 00:24:40.071 01:27:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:40.071 01:27:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:40.071 01:27:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 12658' 00:24:40.071 killing process with pid 12658 00:24:40.071 01:27:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 12658 00:24:40.071 01:27:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 12658 00:24:40.071 01:27:15 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:40.071 [2024-05-15 01:26:58.459676] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:24:40.071 [2024-05-15 01:26:58.459732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid12658 ] 00:24:40.071 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.071 [2024-05-15 01:26:58.529988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.071 [2024-05-15 01:26:58.601822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.071 Running I/O for 15 seconds... 00:24:40.071 [2024-05-15 01:27:01.157310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.071 [2024-05-15 01:27:01.157346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.071 [2024-05-15 01:27:01.157365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:103896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.071 [2024-05-15 01:27:01.157375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.071 [2024-05-15 01:27:01.157387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.071 [2024-05-15 01:27:01.157397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.071 [2024-05-15 01:27:01.157408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.071 [2024-05-15 01:27:01.157417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.071 [2024-05-15 01:27:01.157427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:103920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.071 [2024-05-15 01:27:01.157437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.071 [2024-05-15 01:27:01.157447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.071 [2024-05-15 01:27:01.157457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.071 [2024-05-15 01:27:01.157467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:103936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.071 [2024-05-15 01:27:01.157476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.071 [2024-05-15 01:27:01.157487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.071 [2024-05-15 01:27:01.157496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.071 [2024-05-15 01:27:01.157506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103952 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.071 [2024-05-15 01:27:01.157515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.071 [2024-05-15 01:27:01.157525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:103960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.071 [2024-05-15 01:27:01.157534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.071 [2024-05-15 01:27:01.157545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.071 [2024-05-15 01:27:01.157554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.071 [2024-05-15 01:27:01.157569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.071 [2024-05-15 01:27:01.157578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.071 [2024-05-15 01:27:01.157591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.071 [2024-05-15 01:27:01.157601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.071 [2024-05-15 01:27:01.157611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:103992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.157620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.157631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:104000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.157640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.157650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:104008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.157660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.157670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.157679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.157690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:104024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.157699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.157710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:104032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:40.072 [2024-05-15 01:27:01.157720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.157730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:104040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.157739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.157749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:104048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.157758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.157769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:104056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.157778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.157788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.157797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.157808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.157819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.157829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.157838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.157849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.157857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.157868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:104096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.157878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.157888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.157897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.157908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 
01:27:01.157917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.157927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.157936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.157946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:104128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.157955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.157966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:104136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.157975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.157985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.157995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.158014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:104160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.158033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.158054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:104176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.158075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.158094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.158113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:104200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.158133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.158152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:104216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.158172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:104224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.158195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:104232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.158215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:104240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.158234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.158253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:104256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.072 [2024-05-15 01:27:01.158274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:104264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.072 [2024-05-15 01:27:01.158294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:104272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.072 [2024-05-15 01:27:01.158322] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:104280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.072 [2024-05-15 01:27:01.158342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:104288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.072 [2024-05-15 01:27:01.158363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:104296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.072 [2024-05-15 01:27:01.158382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.072 [2024-05-15 01:27:01.158402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.072 [2024-05-15 01:27:01.158412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:104320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:104360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:104368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:104392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:104400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:104416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:104424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:104432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:40.073 [2024-05-15 01:27:01.158732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:104448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:104464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:104488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 
01:27:01.158930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:104520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.158989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:104544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.158998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.159008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:104552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.159017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.159029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:104560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.159038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.159048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.159057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.159068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:104576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.159078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.159088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.159097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.159108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:104592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.159117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.159127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.159136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.159146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:104608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.159155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.159165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:104616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.159174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.159186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.159198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.073 [2024-05-15 01:27:01.159209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:104632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.073 [2024-05-15 01:27:01.159218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:104640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.074 [2024-05-15 01:27:01.159237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.074 [2024-05-15 01:27:01.159257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:104656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.074 [2024-05-15 01:27:01.159277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.074 [2024-05-15 01:27:01.159296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.074 [2024-05-15 01:27:01.159316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:44 nsid:1 lba:104680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.074 [2024-05-15 01:27:01.159336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:104688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.074 [2024-05-15 01:27:01.159356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.074 [2024-05-15 01:27:01.159375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:104704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.074 [2024-05-15 01:27:01.159395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:104712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.074 [2024-05-15 01:27:01.159414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:104720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.074 [2024-05-15 01:27:01.159433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:104728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.074 [2024-05-15 01:27:01.159453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.074 [2024-05-15 01:27:01.159473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:104744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.074 [2024-05-15 01:27:01.159493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159515] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.074 [2024-05-15 01:27:01.159523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104752 len:8 PRP1 0x0 PRP2 0x0 00:24:40.074 [2024-05-15 01:27:01.159533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 
01:27:01.159544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.074 [2024-05-15 01:27:01.159552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.074 [2024-05-15 01:27:01.159559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104760 len:8 PRP1 0x0 PRP2 0x0 00:24:40.074 [2024-05-15 01:27:01.159568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159577] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.074 [2024-05-15 01:27:01.159585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.074 [2024-05-15 01:27:01.159595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104768 len:8 PRP1 0x0 PRP2 0x0 00:24:40.074 [2024-05-15 01:27:01.159605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.074 [2024-05-15 01:27:01.159621] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.074 [2024-05-15 01:27:01.159628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104776 len:8 PRP1 0x0 PRP2 0x0 00:24:40.074 [2024-05-15 01:27:01.159637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.074 [2024-05-15 01:27:01.159654] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.074 [2024-05-15 01:27:01.159661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104784 len:8 PRP1 0x0 PRP2 0x0 00:24:40.074 [2024-05-15 01:27:01.159670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.074 [2024-05-15 01:27:01.159686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.074 [2024-05-15 01:27:01.159694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104792 len:8 PRP1 0x0 PRP2 0x0 00:24:40.074 [2024-05-15 01:27:01.159703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.074 [2024-05-15 01:27:01.159719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.074 [2024-05-15 01:27:01.159726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104800 len:8 PRP1 0x0 PRP2 0x0 00:24:40.074 [2024-05-15 01:27:01.159735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159744] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.074 [2024-05-15 01:27:01.159751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.074 [2024-05-15 01:27:01.159759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104808 len:8 PRP1 0x0 PRP2 0x0 00:24:40.074 [2024-05-15 01:27:01.159768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.074 [2024-05-15 01:27:01.159784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.074 [2024-05-15 01:27:01.159792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104816 len:8 PRP1 0x0 PRP2 0x0 00:24:40.074 [2024-05-15 01:27:01.159801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.074 [2024-05-15 01:27:01.159817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.074 [2024-05-15 01:27:01.159825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104824 len:8 PRP1 0x0 PRP2 0x0 00:24:40.074 [2024-05-15 01:27:01.159833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.074 [2024-05-15 01:27:01.159852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.074 [2024-05-15 01:27:01.159860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104832 len:8 PRP1 0x0 PRP2 0x0 00:24:40.074 [2024-05-15 01:27:01.159869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.074 [2024-05-15 01:27:01.159885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.074 [2024-05-15 01:27:01.159893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104840 len:8 PRP1 0x0 PRP2 0x0 00:24:40.074 [2024-05-15 01:27:01.159902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159912] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.074 [2024-05-15 01:27:01.159919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.074 [2024-05-15 01:27:01.159926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104848 len:8 PRP1 0x0 PRP2 0x0 00:24:40.074 [2024-05-15 01:27:01.159935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:24:40.074 [2024-05-15 01:27:01.159951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.074 [2024-05-15 01:27:01.159959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104856 len:8 PRP1 0x0 PRP2 0x0 00:24:40.074 [2024-05-15 01:27:01.159968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.074 [2024-05-15 01:27:01.159977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.074 [2024-05-15 01:27:01.159984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.074 [2024-05-15 01:27:01.159991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104864 len:8 PRP1 0x0 PRP2 0x0 00:24:40.075 [2024-05-15 01:27:01.160000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:01.160009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.075 [2024-05-15 01:27:01.160016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.075 [2024-05-15 01:27:01.160024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104872 len:8 PRP1 0x0 PRP2 0x0 00:24:40.075 [2024-05-15 01:27:01.160033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:01.173902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.075 [2024-05-15 01:27:01.173917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.075 [2024-05-15 01:27:01.173928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104880 len:8 PRP1 0x0 PRP2 0x0 00:24:40.075 [2024-05-15 01:27:01.173940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:01.173952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.075 [2024-05-15 01:27:01.173963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.075 [2024-05-15 01:27:01.173973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104888 len:8 PRP1 0x0 PRP2 0x0 00:24:40.075 [2024-05-15 01:27:01.173987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:01.174000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.075 [2024-05-15 01:27:01.174009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.075 [2024-05-15 01:27:01.174020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104896 len:8 PRP1 0x0 PRP2 0x0 00:24:40.075 [2024-05-15 01:27:01.174032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:01.174045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.075 [2024-05-15 
01:27:01.174054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:40.075 [2024-05-15 01:27:01.174064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104904 len:8 PRP1 0x0 PRP2 0x0
00:24:40.075 [2024-05-15 01:27:01.174076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.075 [2024-05-15 01:27:01.174125] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x166a7d0 was disconnected and freed. reset controller.
00:24:40.075 [2024-05-15 01:27:01.174146] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:40.075 [2024-05-15 01:27:01.174176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:40.075 [2024-05-15 01:27:01.174188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.075 [2024-05-15 01:27:01.174216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:40.075 [2024-05-15 01:27:01.174228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.075 [2024-05-15 01:27:01.174241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:40.075 [2024-05-15 01:27:01.174253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.075 [2024-05-15 01:27:01.174266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:40.075 [2024-05-15 01:27:01.174278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.075 [2024-05-15 01:27:01.174290] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.075 [2024-05-15 01:27:01.174330] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164b590 (9): Bad file descriptor
00:24:40.075 [2024-05-15 01:27:01.177942] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.075 [2024-05-15 01:27:01.208822] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:40.075 [2024-05-15 01:27:04.604395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.075 [2024-05-15 01:27:04.604432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:04.604448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:39992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.075 [2024-05-15 01:27:04.604459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:04.604471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.075 [2024-05-15 01:27:04.604487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:04.604498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.075 [2024-05-15 01:27:04.604507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:04.604518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.075 [2024-05-15 01:27:04.604527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:04.604537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:40216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.075 [2024-05-15 01:27:04.604547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:04.604557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:40224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.075 [2024-05-15 01:27:04.604566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:04.604577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:40232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.075 [2024-05-15 01:27:04.604586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:04.604596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.075 [2024-05-15 01:27:04.604605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:04.604615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.075 [2024-05-15 01:27:04.604624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:04.604635] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:40256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.075 [2024-05-15 01:27:04.604644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:04.604654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:40264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.075 [2024-05-15 01:27:04.604663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:04.604674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:40272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.075 [2024-05-15 01:27:04.604683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:04.604693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.075 [2024-05-15 01:27:04.604702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:04.604713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.075 [2024-05-15 01:27:04.604722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:04.604734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.075 [2024-05-15 01:27:04.604743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:04.604753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:40048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.075 [2024-05-15 01:27:04.604762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:04.604773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.075 [2024-05-15 01:27:04.604782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.075 [2024-05-15 01:27:04.604792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.075 [2024-05-15 01:27:04.604801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.604811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.076 [2024-05-15 01:27:04.604820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.604830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:40080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.076 [2024-05-15 01:27:04.604840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.604850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.604859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.604870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:40288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.604879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.604889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:40296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.604898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.604908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:40304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.604917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.604928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.604937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.604947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:40320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.604956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.604966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.604977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.604987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:40336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.604996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.076 [2024-05-15 01:27:04.605016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:40096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.076 [2024-05-15 01:27:04.605036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.076 [2024-05-15 01:27:04.605055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:40112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.076 [2024-05-15 01:27:04.605077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.076 [2024-05-15 01:27:04.605097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:40128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.076 [2024-05-15 01:27:04.605117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:40136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.076 [2024-05-15 01:27:04.605136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:40144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.076 [2024-05-15 01:27:04.605155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:40344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.605174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:40352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.605200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:40360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.605220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:40368 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.605247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:40376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.605276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:40384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.605296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:40392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.605315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.605334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:40408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.605354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:40416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.605373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.605392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.605412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:40440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.605431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:40448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 
[2024-05-15 01:27:04.605451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:40456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.605470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:40464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.605490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.076 [2024-05-15 01:27:04.605501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:40472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.076 [2024-05-15 01:27:04.605510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.605529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:40488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.605548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:40496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.605568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:40504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.605587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.605606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:40520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.605625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.605644] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.605664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.605684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:40552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.605704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.605725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:40568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.605745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.605765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:40584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.605784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.605803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:40152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.077 [2024-05-15 01:27:04.605823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.077 [2024-05-15 01:27:04.605842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:40168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.077 [2024-05-15 01:27:04.605861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.077 [2024-05-15 01:27:04.605881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.077 [2024-05-15 01:27:04.605900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.077 [2024-05-15 01:27:04.605919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:40200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.077 [2024-05-15 01:27:04.605939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.077 [2024-05-15 01:27:04.605958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:40600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.605977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.605987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.605997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.606007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.606016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.606027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:40624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.606036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.606046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:40632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.606055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.606065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:40640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.606074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.606084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:40648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.606093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.606103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:40656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.606112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.606123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:40664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.606131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.606142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:40672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.606151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.606162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:40680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.606171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.606181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:40688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.606193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.606204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.606213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.606224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:40704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.606233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 
01:27:04.606244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.077 [2024-05-15 01:27:04.606253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.077 [2024-05-15 01:27:04.606263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:40720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.078 [2024-05-15 01:27:04.606273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:40728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.078 [2024-05-15 01:27:04.606293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:40736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.078 [2024-05-15 01:27:04.606312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:40744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.078 [2024-05-15 01:27:04.606332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.078 [2024-05-15 01:27:04.606352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:40760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.078 [2024-05-15 01:27:04.606371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:40768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.078 [2024-05-15 01:27:04.606391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:40776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.078 [2024-05-15 01:27:04.606411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.078 [2024-05-15 01:27:04.606430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:40792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.078 [2024-05-15 01:27:04.606451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:40800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.078 [2024-05-15 01:27:04.606471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:40808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.078 [2024-05-15 01:27:04.606492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:40816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.078 [2024-05-15 01:27:04.606512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:40824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.078 [2024-05-15 01:27:04.606532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:40832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.078 [2024-05-15 01:27:04.606552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:40840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.078 [2024-05-15 01:27:04.606572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:40848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.078 [2024-05-15 01:27:04.606592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606615] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.078 [2024-05-15 01:27:04.606624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40856 len:8 PRP1 0x0 PRP2 0x0 00:24:40.078 [2024-05-15 01:27:04.606633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.078 [2024-05-15 01:27:04.606652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.078 [2024-05-15 01:27:04.606659] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40864 len:8 PRP1 0x0 PRP2 0x0 00:24:40.078 [2024-05-15 01:27:04.606669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.078 [2024-05-15 01:27:04.606686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.078 [2024-05-15 01:27:04.606694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40872 len:8 PRP1 0x0 PRP2 0x0 00:24:40.078 [2024-05-15 01:27:04.606703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.078 [2024-05-15 01:27:04.606719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.078 [2024-05-15 01:27:04.606727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40880 len:8 PRP1 0x0 PRP2 0x0 00:24:40.078 [2024-05-15 01:27:04.606736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606745] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.078 [2024-05-15 01:27:04.606753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.078 [2024-05-15 01:27:04.606762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40888 len:8 PRP1 0x0 PRP2 0x0 00:24:40.078 [2024-05-15 01:27:04.606771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606781] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.078 [2024-05-15 01:27:04.606788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.078 [2024-05-15 01:27:04.606796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40896 len:8 PRP1 0x0 PRP2 0x0 00:24:40.078 [2024-05-15 01:27:04.606805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606814] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.078 [2024-05-15 01:27:04.606821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.078 [2024-05-15 01:27:04.606829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40904 len:8 PRP1 0x0 PRP2 0x0 00:24:40.078 [2024-05-15 01:27:04.606840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.078 [2024-05-15 01:27:04.606856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.078 [2024-05-15 01:27:04.606864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40912 len:8 PRP1 
0x0 PRP2 0x0 00:24:40.078 [2024-05-15 01:27:04.606873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606882] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.078 [2024-05-15 01:27:04.606890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.078 [2024-05-15 01:27:04.606898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40920 len:8 PRP1 0x0 PRP2 0x0 00:24:40.078 [2024-05-15 01:27:04.606907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.078 [2024-05-15 01:27:04.606923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.078 [2024-05-15 01:27:04.606930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40928 len:8 PRP1 0x0 PRP2 0x0 00:24:40.078 [2024-05-15 01:27:04.606939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.078 [2024-05-15 01:27:04.606956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.078 [2024-05-15 01:27:04.606964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40936 len:8 PRP1 0x0 PRP2 0x0 00:24:40.078 [2024-05-15 01:27:04.606973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.606982] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.078 [2024-05-15 01:27:04.606989] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.078 [2024-05-15 01:27:04.606996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40944 len:8 PRP1 0x0 PRP2 0x0 00:24:40.078 [2024-05-15 01:27:04.607005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.607014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.078 [2024-05-15 01:27:04.607022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.078 [2024-05-15 01:27:04.607030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40952 len:8 PRP1 0x0 PRP2 0x0 00:24:40.078 [2024-05-15 01:27:04.607038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.078 [2024-05-15 01:27:04.607048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.078 [2024-05-15 01:27:04.607055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.078 [2024-05-15 01:27:04.607062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40960 len:8 PRP1 0x0 PRP2 0x0 00:24:40.079 [2024-05-15 01:27:04.607071] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:04.607080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.079 [2024-05-15 01:27:04.607087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.079 [2024-05-15 01:27:04.607094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40968 len:8 PRP1 0x0 PRP2 0x0 00:24:40.079 [2024-05-15 01:27:04.607103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:04.607112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.079 [2024-05-15 01:27:04.607120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.079 [2024-05-15 01:27:04.607127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40976 len:8 PRP1 0x0 PRP2 0x0 00:24:40.079 [2024-05-15 01:27:04.619204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:04.619221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.079 [2024-05-15 01:27:04.619231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.079 [2024-05-15 01:27:04.619242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40984 len:8 PRP1 0x0 PRP2 0x0 00:24:40.079 [2024-05-15 01:27:04.619254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:04.619267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.079 [2024-05-15 01:27:04.619276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.079 [2024-05-15 01:27:04.619287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40992 len:8 PRP1 0x0 PRP2 0x0 00:24:40.079 [2024-05-15 01:27:04.619299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:04.619312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.079 [2024-05-15 01:27:04.619322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.079 [2024-05-15 01:27:04.619331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41000 len:8 PRP1 0x0 PRP2 0x0 00:24:40.079 [2024-05-15 01:27:04.619343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:04.619391] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1814f40 was disconnected and freed. reset controller. 
00:24:40.079 [2024-05-15 01:27:04.619405] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:40.079 [2024-05-15 01:27:04.619432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:40.079 [2024-05-15 01:27:04.619445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.079 [2024-05-15 01:27:04.619461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:40.079 [2024-05-15 01:27:04.619473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.079 [2024-05-15 01:27:04.619485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:40.079 [2024-05-15 01:27:04.619497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.079 [2024-05-15 01:27:04.619510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:40.079 [2024-05-15 01:27:04.619522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.079 [2024-05-15 01:27:04.619534] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.079 [2024-05-15 01:27:04.619563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164b590 (9): Bad file descriptor
00:24:40.079 [2024-05-15 01:27:04.623178] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.079 [2024-05-15 01:27:04.656399] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:40.079 [2024-05-15 01:27:08.993026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.079 [2024-05-15 01:27:08.993062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.079 [2024-05-15 01:27:08.993090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.079 [2024-05-15 01:27:08.993110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.079 [2024-05-15 01:27:08.993130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.079 [2024-05-15 01:27:08.993150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.079 [2024-05-15 01:27:08.993170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.079 [2024-05-15 01:27:08.993195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.079 [2024-05-15 01:27:08.993215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.079 [2024-05-15 01:27:08.993240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.079 [2024-05-15 01:27:08.993259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993269] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.079 [2024-05-15 01:27:08.993279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.079 [2024-05-15 01:27:08.993300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.079 [2024-05-15 01:27:08.993319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.079 [2024-05-15 01:27:08.993339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.079 [2024-05-15 01:27:08.993359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.079 [2024-05-15 01:27:08.993378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.079 [2024-05-15 01:27:08.993398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.079 [2024-05-15 01:27:08.993417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.079 [2024-05-15 01:27:08.993437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.079 [2024-05-15 01:27:08.993456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993467] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.079 [2024-05-15 01:27:08.993477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.079 [2024-05-15 01:27:08.993497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.079 [2024-05-15 01:27:08.993516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.079 [2024-05-15 01:27:08.993527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.079 [2024-05-15 01:27:08.993536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.080 [2024-05-15 01:27:08.993555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.080 [2024-05-15 01:27:08.993575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.080 [2024-05-15 01:27:08.993595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.080 [2024-05-15 01:27:08.993614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.080 [2024-05-15 01:27:08.993634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.080 [2024-05-15 01:27:08.993653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80688 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.080 [2024-05-15 01:27:08.993673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.080 [2024-05-15 01:27:08.993693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.080 [2024-05-15 01:27:08.993713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.080 [2024-05-15 01:27:08.993734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.080 [2024-05-15 01:27:08.993753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.080 [2024-05-15 01:27:08.993773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.080 [2024-05-15 01:27:08.993792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.080 [2024-05-15 01:27:08.993813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.080 [2024-05-15 01:27:08.993833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.080 [2024-05-15 01:27:08.993852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.080 
[2024-05-15 01:27:08.993872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.080 [2024-05-15 01:27:08.993891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.080 [2024-05-15 01:27:08.993910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.080 [2024-05-15 01:27:08.993930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.080 [2024-05-15 01:27:08.993940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.993949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.993960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.993973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.993983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.993992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994070] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.081 [2024-05-15 01:27:08.994575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.081 [2024-05-15 01:27:08.994584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.994594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.994603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.994613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.994623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.994634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.994643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.994653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.994662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 
01:27:08.994672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.994681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.994691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.994701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.994712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.994722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.994732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.994741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.994752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.994761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.994771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.994780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.994790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.994799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.994809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.994819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.994829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.994838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.994848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.994857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.994867] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.994876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.994888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.994896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.994907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.994916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.994926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.994936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.994946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.994955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.994965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.994976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.994987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.082 [2024-05-15 01:27:08.994996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.995006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.082 [2024-05-15 01:27:08.995015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.995025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.082 [2024-05-15 01:27:08.995034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.995045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.082 [2024-05-15 01:27:08.995054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.995064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:13 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.082 [2024-05-15 01:27:08.995073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.995083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.082 [2024-05-15 01:27:08.995092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.995103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.082 [2024-05-15 01:27:08.995112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.995122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.082 [2024-05-15 01:27:08.995131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.995141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.082 [2024-05-15 01:27:08.995150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.995161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.082 [2024-05-15 01:27:08.995170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.995181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.082 [2024-05-15 01:27:08.995194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.995204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.082 [2024-05-15 01:27:08.995213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.995225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.082 [2024-05-15 01:27:08.995234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.995245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.082 [2024-05-15 01:27:08.995254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.995264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:80352 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.082 [2024-05-15 01:27:08.995273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.995283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.082 [2024-05-15 01:27:08.995293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.995304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.082 [2024-05-15 01:27:08.995313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.995323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.082 [2024-05-15 01:27:08.995332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.995342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.082 [2024-05-15 01:27:08.995352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.082 [2024-05-15 01:27:08.995362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.082 [2024-05-15 01:27:08.995371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.083 [2024-05-15 01:27:08.995381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.083 [2024-05-15 01:27:08.995391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.083 [2024-05-15 01:27:08.995401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.083 [2024-05-15 01:27:08.995410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.083 [2024-05-15 01:27:08.995421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.083 [2024-05-15 01:27:08.995430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.083 [2024-05-15 01:27:08.995440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.083 [2024-05-15 01:27:08.995449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.083 [2024-05-15 01:27:08.995459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:40.083 [2024-05-15 01:27:08.995470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.083 [2024-05-15 01:27:08.995480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.083 [2024-05-15 01:27:08.995490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.083 [2024-05-15 01:27:08.995500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.083 [2024-05-15 01:27:08.995509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.083 [2024-05-15 01:27:08.995519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.083 [2024-05-15 01:27:08.995528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.083 [2024-05-15 01:27:08.995539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.083 [2024-05-15 01:27:08.995548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.083 [2024-05-15 01:27:08.995558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.083 [2024-05-15 01:27:08.995567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.083 [2024-05-15 01:27:08.995590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:40.083 [2024-05-15 01:27:08.995599] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:40.083 [2024-05-15 01:27:08.995607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80480 len:8 PRP1 0x0 PRP2 0x0 00:24:40.083 [2024-05-15 01:27:08.995616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.083 [2024-05-15 01:27:08.995659] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1814d30 was disconnected and freed. reset controller. 
00:24:40.083 [2024-05-15 01:27:08.995671] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:40.083 [2024-05-15 01:27:08.995692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.083 [2024-05-15 01:27:08.995702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.083 [2024-05-15 01:27:08.995712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.083 [2024-05-15 01:27:08.995721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.083 [2024-05-15 01:27:08.995730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.083 [2024-05-15 01:27:08.995739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.083 [2024-05-15 01:27:08.995749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:40.083 [2024-05-15 01:27:08.995758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.083 [2024-05-15 01:27:08.995767] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.083 [2024-05-15 01:27:08.998474] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.083 [2024-05-15 01:27:08.998507] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164b590 (9): Bad file descriptor 00:24:40.083 [2024-05-15 01:27:09.070655] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
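The abort flood and controller resets recorded above are the expected effect of the failover exercise this job runs: bdevperf holds a single NVMe bdev (NVMe0) with several TCP paths to nqn.2016-06.io.spdk:cnode1, and when the active path is torn down bdev_nvme aborts the queued I/O (the ABORTED - SQ DELETION completions), disconnects the qpair, and resets the controller against the next trid. A minimal sketch of that wiring, using only rpc.py calls that appear verbatim further down in this trace (the long workspace prefix on rpc.py is shortened here for readability; this is an illustration, not the exact script flow):

  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
  # one attach per portal gives bdev_nvme alternate trids behind the same bdev name
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # dropping the path in use forces the abort + reset sequence seen above
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1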
00:24:40.083
00:24:40.083 Latency(us)
00:24:40.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:40.083 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:40.083 Verification LBA range: start 0x0 length 0x4000
00:24:40.083 NVMe0n1 : 15.01 11840.94 46.25 426.26 0.00 10412.22 835.58 24851.25
00:24:40.083 ===================================================================================================================
00:24:40.083 Total : 11840.94 46.25 426.26 0.00 10412.22 835.58 24851.25
00:24:40.083 Received shutdown signal, test time was about 15.000000 seconds
00:24:40.083
00:24:40.083 Latency(us)
00:24:40.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:40.083 ===================================================================================================================
00:24:40.083 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:40.083 01:27:15 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:40.083 01:27:15 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:40.083 01:27:15 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:40.083 01:27:15 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=16135
00:24:40.083 01:27:15 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:40.083 01:27:15 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 16135 /var/tmp/bdevperf.sock
00:24:40.083 01:27:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 16135 ']'
00:24:40.083 01:27:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:40.083 01:27:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100
00:24:40.083 01:27:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:40.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
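The pass/fail gate traced just above (failover.sh@65-@67) boils down to counting those reset notices and expecting one per failover. As a standalone sketch only, assuming the first bdevperf run's output has been captured to the try.txt file that this trace cats and removes later:

  count=$(grep -c 'Resetting controller successful' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
  (( count != 3 )) && exit 1   # three failovers expected from the 15 s run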
00:24:40.083 01:27:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:40.083 01:27:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:40.651 01:27:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:40.651 01:27:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:24:40.651 01:27:16 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:40.910 [2024-05-15 01:27:16.399054] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:40.910 01:27:16 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:40.910 [2024-05-15 01:27:16.583559] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:41.169 01:27:16 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:41.428 NVMe0n1 00:24:41.428 01:27:17 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:41.686 00:24:41.946 01:27:17 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:41.946 00:24:41.946 01:27:17 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:41.946 01:27:17 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:42.236 01:27:17 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:42.497 01:27:17 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:45.783 01:27:20 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:45.783 01:27:20 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:45.783 01:27:21 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:45.783 01:27:21 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=16958 00:24:45.783 01:27:21 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 16958 00:24:46.721 0 00:24:46.721 01:27:22 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:46.721 [2024-05-15 01:27:15.446747] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:24:46.721 [2024-05-15 01:27:15.446805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid16135 ] 00:24:46.721 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.721 [2024-05-15 01:27:15.517802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.721 [2024-05-15 01:27:15.583701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.721 [2024-05-15 01:27:17.948744] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:46.721 [2024-05-15 01:27:17.948789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.721 [2024-05-15 01:27:17.948802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.721 [2024-05-15 01:27:17.948814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.721 [2024-05-15 01:27:17.948823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.721 [2024-05-15 01:27:17.948833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.721 [2024-05-15 01:27:17.948842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.721 [2024-05-15 01:27:17.948852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.721 [2024-05-15 01:27:17.948861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.721 [2024-05-15 01:27:17.948870] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:46.721 [2024-05-15 01:27:17.948891] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:46.721 [2024-05-15 01:27:17.948907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x755590 (9): Bad file descriptor 00:24:46.721 [2024-05-15 01:27:17.953188] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:46.721 Running I/O for 1 seconds... 
00:24:46.721
00:24:46.721 Latency(us)
00:24:46.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:46.721 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:46.721 Verification LBA range: start 0x0 length 0x4000
00:24:46.721 NVMe0n1 : 1.01 12393.47 48.41 0.00 0.00 10275.24 1952.97 21076.38
00:24:46.721 ===================================================================================================================
00:24:46.721 Total : 12393.47 48.41 0.00 0.00 10275.24 1952.97 21076.38
00:24:46.721 01:27:22 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:46.721 01:27:22 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:24:46.980 01:27:22 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:46.980 01:27:22 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:46.980 01:27:22 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:24:47.238 01:27:22 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:47.497 01:27:23 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:24:50.784 01:27:26 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:50.784 01:27:26 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:24:50.784 01:27:26 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 16135
00:24:50.784 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 16135 ']'
00:24:50.784 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 16135
00:24:50.784 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:24:50.784 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:24:50.784 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 16135
00:24:50.785 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:24:50.785 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:24:50.785 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 16135'
00:24:50.785 killing process with pid 16135
00:24:50.785 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 16135
00:24:50.785 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 16135
00:24:50.785 01:27:26 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:24:50.785 01:27:26 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:51.043 01:27:26 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:24:51.043 01:27:26
nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:51.043 01:27:26 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:51.043 01:27:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:51.043 01:27:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:24:51.043 01:27:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:51.043 01:27:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:24:51.043 01:27:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:51.043 01:27:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:51.043 rmmod nvme_tcp 00:24:51.043 rmmod nvme_fabrics 00:24:51.043 rmmod nvme_keyring 00:24:51.043 01:27:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:51.043 01:27:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:24:51.043 01:27:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:24:51.043 01:27:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 12352 ']' 00:24:51.043 01:27:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 12352 00:24:51.043 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 12352 ']' 00:24:51.043 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 12352 00:24:51.043 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:24:51.043 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:51.043 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 12352 00:24:51.317 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:51.317 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:51.317 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 12352' 00:24:51.317 killing process with pid 12352 00:24:51.317 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 12352 00:24:51.317 [2024-05-15 01:27:26.772809] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:51.317 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 12352 00:24:51.317 01:27:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:51.317 01:27:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:51.317 01:27:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:51.318 01:27:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:51.318 01:27:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:51.318 01:27:26 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.318 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.318 01:27:26 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.866 01:27:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:53.866 00:24:53.866 real 0m39.186s 00:24:53.866 user 2m2.567s 00:24:53.866 sys 
0m9.412s 00:24:53.866 01:27:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:53.866 01:27:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:53.866 ************************************ 00:24:53.866 END TEST nvmf_failover 00:24:53.866 ************************************ 00:24:53.866 01:27:29 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:53.866 01:27:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:53.866 01:27:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:53.866 01:27:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:53.866 ************************************ 00:24:53.866 START TEST nvmf_host_discovery 00:24:53.866 ************************************ 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:53.866 * Looking for test storage... 00:24:53.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 
00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:24:53.866 01:27:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.435 01:27:35 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:00.435 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:00.435 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:00.435 
01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:00.435 Found net devices under 0000:af:00.0: cvl_0_0 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:00.435 Found net devices under 0000:af:00.1: cvl_0_1 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.435 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.436 01:27:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:00.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:25:00.436 00:25:00.436 --- 10.0.0.2 ping statistics --- 00:25:00.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.436 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:00.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:00.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:25:00.436 00:25:00.436 --- 10.0.0.1 ping statistics --- 00:25:00.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.436 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=21704 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 21704 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 21704 ']' 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:00.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:00.436 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:00.436 [2024-05-15 01:27:36.116915] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:25:00.436 [2024-05-15 01:27:36.116962] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.695 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.695 [2024-05-15 01:27:36.188652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.695 [2024-05-15 01:27:36.256771] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
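Those two pings close out the topology that nvmf_tcp_init laid down above: one of the two NIC ports (cvl_0_0) is moved into a private network namespace and addressed as the 10.0.0.2 target side, while its peer (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator. Condensed from the ip/iptables calls in the trace, as an illustration only (interface and namespace names as printed there):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # keep NVMe/TCP port 4420 open on the initiator interface
  ping -c 1 10.0.0.2                                             # root namespace -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespaced target -> root namespace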
00:25:00.695 [2024-05-15 01:27:36.256811] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.695 [2024-05-15 01:27:36.256821] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.695 [2024-05-15 01:27:36.256830] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.695 [2024-05-15 01:27:36.256837] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.695 [2024-05-15 01:27:36.256861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.263 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:01.263 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:25:01.263 01:27:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:01.263 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:01.263 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.263 01:27:36 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.263 01:27:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:01.263 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.263 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.522 [2024-05-15 01:27:36.955312] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.522 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.522 01:27:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:01.522 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.522 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.523 [2024-05-15 01:27:36.967300] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:01.523 [2024-05-15 01:27:36.967539] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:01.523 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.523 01:27:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:01.523 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.523 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.523 null0 00:25:01.523 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.523 01:27:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:01.523 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.523 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.523 null1 00:25:01.523 01:27:36 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.523 01:27:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:01.523 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.523 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.523 01:27:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.523 01:27:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=21740 00:25:01.523 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:01.523 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 21740 /tmp/host.sock 00:25:01.523 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 21740 ']' 00:25:01.523 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:25:01.523 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:01.523 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:01.523 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:01.523 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:01.523 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:01.523 [2024-05-15 01:27:37.048038] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:25:01.523 [2024-05-15 01:27:37.048084] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid21740 ] 00:25:01.523 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.523 [2024-05-15 01:27:37.117676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.523 [2024-05-15 01:27:37.194017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.459 01:27:37 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:02.459 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.460 01:27:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.460 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:02.460 01:27:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.460 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.719 [2024-05-15 01:27:38.202708] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:02.719 
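The rpc_cmd | jq | sort | xargs pipelines that keep repeating above are the test's small query helpers from host/discovery.sh. Reconstructed from the xtrace output, they amount to approximately the following (the exact implementation in the SPDK tree may differ slightly):

get_subsystem_names() {
    # controller names known to the host-side bdev_nvme module (empty until discovery attaches one)
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # bdevs the host has created from attached namespaces, e.g. "nvme0n1 nvme0n2"
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
    # listening ports (trsvcid) of every path attached for controller $1, e.g. "4420 4421"
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}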
01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:02.719 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:02.720 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.978 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:25:02.978 01:27:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:25:03.237 [2024-05-15 01:27:38.925127] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:03.237 [2024-05-15 01:27:38.925151] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:03.237 [2024-05-15 01:27:38.925167] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:03.495 [2024-05-15 01:27:39.013432] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:03.756 [2024-05-15 01:27:39.239181] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:25:03.756 [2024-05-15 01:27:39.239209] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:03.756 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:03.756 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:03.756 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:03.756 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:03.756 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:03.756 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.756 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:03.756 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:03.756 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:03.756 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:04.016 01:27:39 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.016 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.016 [2024-05-15 01:27:39.706897] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:04.276 [2024-05-15 01:27:39.707970] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:04.276 [2024-05-15 01:27:39.707993] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:04.276 [2024-05-15 01:27:39.796250] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:04.276 01:27:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:25:04.276 [2024-05-15 01:27:39.897807] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:04.276 [2024-05-15 01:27:39.897825] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:04.276 [2024-05-15 01:27:39.897833] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:05.246 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.506 [2024-05-15 01:27:40.970962] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:05.506 [2024-05-15 01:27:40.970986] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:05.506 [2024-05-15 01:27:40.971016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:05.506 [2024-05-15 01:27:40.971038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.506 [2024-05-15 01:27:40.971049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:05.506 [2024-05-15 01:27:40.971058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.506 [2024-05-15 01:27:40.971068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:05.506 [2024-05-15 01:27:40.971077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.506 [2024-05-15 01:27:40.971086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:05.506 [2024-05-15 01:27:40.971095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:05.506 [2024-05-15 01:27:40.971104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f17130 is same with the state(5) to be set 00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == 
'"nvme0"' ']]' 00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:05.506 [2024-05-15 01:27:40.981024] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f17130 (9): Bad file descriptor 00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:05.506 [2024-05-15 01:27:40.991078] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:05.506 [2024-05-15 01:27:40.991493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.506 [2024-05-15 01:27:40.991921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.506 [2024-05-15 01:27:40.991934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f17130 with addr=10.0.0.2, port=4420 00:25:05.506 [2024-05-15 01:27:40.991944] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f17130 is same with the state(5) to be set 00:25:05.506 [2024-05-15 01:27:40.991958] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f17130 (9): Bad file descriptor 00:25:05.506 [2024-05-15 01:27:40.991971] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:05.506 [2024-05-15 01:27:40.991980] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:05.506 [2024-05-15 01:27:40.991991] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:05.506 [2024-05-15 01:27:40.992003] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:05.506 01:27:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.506 [2024-05-15 01:27:41.001135] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:05.506 [2024-05-15 01:27:41.001597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.506 [2024-05-15 01:27:41.002025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.506 [2024-05-15 01:27:41.002037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f17130 with addr=10.0.0.2, port=4420 00:25:05.506 [2024-05-15 01:27:41.002047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f17130 is same with the state(5) to be set 00:25:05.506 [2024-05-15 01:27:41.002060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f17130 (9): Bad file descriptor 00:25:05.506 [2024-05-15 01:27:41.002080] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:05.506 [2024-05-15 01:27:41.002089] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:05.506 [2024-05-15 01:27:41.002098] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:05.506 [2024-05-15 01:27:41.002109] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:05.506 [2024-05-15 01:27:41.011188] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:05.506 [2024-05-15 01:27:41.011624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.506 [2024-05-15 01:27:41.012045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.506 [2024-05-15 01:27:41.012057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f17130 with addr=10.0.0.2, port=4420 00:25:05.506 [2024-05-15 01:27:41.012067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f17130 is same with the state(5) to be set 00:25:05.506 [2024-05-15 01:27:41.012079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f17130 (9): Bad file descriptor 00:25:05.506 [2024-05-15 01:27:41.012091] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:05.506 [2024-05-15 01:27:41.012100] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:05.506 [2024-05-15 01:27:41.012108] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:05.506 [2024-05-15 01:27:41.012126] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:05.506 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.506 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:05.506 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:05.506 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:05.506 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:05.506 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:05.506 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:05.506 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:05.506 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:05.507 [2024-05-15 01:27:41.021242] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:05.507 [2024-05-15 01:27:41.021635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.507 [2024-05-15 01:27:41.021940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.507 [2024-05-15 01:27:41.021954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f17130 with addr=10.0.0.2, port=4420 00:25:05.507 [2024-05-15 01:27:41.021964] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f17130 is same with the state(5) to be set 00:25:05.507 [2024-05-15 01:27:41.021979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f17130 (9): Bad file descriptor 00:25:05.507 [2024-05-15 01:27:41.021993] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:05.507 [2024-05-15 01:27:41.022002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:05.507 [2024-05-15 01:27:41.022012] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:05.507 [2024-05-15 01:27:41.022024] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:05.507 [2024-05-15 01:27:41.031299] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:05.507 [2024-05-15 01:27:41.031647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.507 [2024-05-15 01:27:41.032023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.507 [2024-05-15 01:27:41.032035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f17130 with addr=10.0.0.2, port=4420 00:25:05.507 [2024-05-15 01:27:41.032046] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f17130 is same with the state(5) to be set 00:25:05.507 [2024-05-15 01:27:41.032059] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f17130 (9): Bad file descriptor 00:25:05.507 [2024-05-15 01:27:41.032072] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:05.507 [2024-05-15 01:27:41.032080] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:05.507 [2024-05-15 01:27:41.032090] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:05.507 [2024-05-15 01:27:41.032101] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:05.507 [2024-05-15 01:27:41.041357] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:05.507 [2024-05-15 01:27:41.041712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.507 [2024-05-15 01:27:41.042152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.507 [2024-05-15 01:27:41.042164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f17130 with addr=10.0.0.2, port=4420 00:25:05.507 [2024-05-15 01:27:41.042174] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f17130 is same with the state(5) to be set 00:25:05.507 [2024-05-15 01:27:41.042187] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f17130 (9): Bad file descriptor 00:25:05.507 [2024-05-15 01:27:41.042205] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:05.507 [2024-05-15 01:27:41.042214] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:05.507 [2024-05-15 01:27:41.042223] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:05.507 [2024-05-15 01:27:41.042242] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
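The "connect() failed, errno = 111" lines above and below are ECONNREFUSED: the host keeps retrying the 10.0.0.2:4420 path whose listener was just removed, and every attempt is refused until the next discovery log page reports that path as "not found" and drops it, leaving only 4421. The path check the test performs next boils down to roughly the following (reconstructed from the trace):

# expect only the 4421 path to remain on controller nvme0 once the 4420 listener is gone
[[ "$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)" == "4421" ]]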
00:25:05.507 [2024-05-15 01:27:41.051408] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:05.507 [2024-05-15 01:27:41.051852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.507 [2024-05-15 01:27:41.052202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:05.507 [2024-05-15 01:27:41.052215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f17130 with addr=10.0.0.2, port=4420 00:25:05.507 [2024-05-15 01:27:41.052224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f17130 is same with the state(5) to be set 00:25:05.507 [2024-05-15 01:27:41.052237] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f17130 (9): Bad file descriptor 00:25:05.507 [2024-05-15 01:27:41.052249] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:05.507 [2024-05-15 01:27:41.052257] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:05.507 [2024-05-15 01:27:41.052266] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:05.507 [2024-05-15 01:27:41.052277] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:05.507 [2024-05-15 01:27:41.059318] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:05.507 [2024-05-15 01:27:41.059335] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:05.507 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:05.766 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.767 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.767 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:05.767 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:05.767 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:05.767 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:25:05.767 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:25:05.767 01:27:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:05.767 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:05.767 01:27:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.704 [2024-05-15 01:27:42.362487] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:06.704 [2024-05-15 01:27:42.362505] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:06.704 [2024-05-15 01:27:42.362519] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:06.962 [2024-05-15 01:27:42.448772] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:07.221 [2024-05-15 01:27:42.758331] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:07.221 [2024-05-15 01:27:42.758358] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.221 request: 00:25:07.221 { 00:25:07.221 "name": "nvme", 00:25:07.221 "trtype": "tcp", 00:25:07.221 "traddr": "10.0.0.2", 00:25:07.221 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:07.221 "adrfam": "ipv4", 00:25:07.221 "trsvcid": "8009", 00:25:07.221 "wait_for_attach": true, 00:25:07.221 "method": "bdev_nvme_start_discovery", 00:25:07.221 "req_id": 1 00:25:07.221 } 00:25:07.221 Got JSON-RPC error response 00:25:07.221 response: 00:25:07.221 { 00:25:07.221 "code": -17, 00:25:07.221 "message": "File exists" 00:25:07.221 } 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.221 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.222 request: 00:25:07.222 { 00:25:07.222 "name": "nvme_second", 00:25:07.222 "trtype": "tcp", 00:25:07.222 "traddr": "10.0.0.2", 00:25:07.222 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:07.222 "adrfam": "ipv4", 00:25:07.222 "trsvcid": "8009", 00:25:07.222 "wait_for_attach": true, 00:25:07.222 "method": "bdev_nvme_start_discovery", 00:25:07.222 "req_id": 1 00:25:07.222 } 00:25:07.222 Got JSON-RPC error response 00:25:07.222 response: 00:25:07.222 { 00:25:07.222 "code": -17, 00:25:07.222 "message": "File exists" 00:25:07.222 } 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:07.222 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:07.480 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.480 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:07.480 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:07.480 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.480 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:07.480 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.480 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:07.480 01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.481 01:27:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:07.481 
01:27:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.481 01:27:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:07.481 01:27:43 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:07.481 01:27:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:07.481 01:27:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:07.481 01:27:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:07.481 01:27:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.481 01:27:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:07.481 01:27:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:07.481 01:27:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:07.481 01:27:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.481 01:27:43 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.417 [2024-05-15 01:27:44.022394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.417 [2024-05-15 01:27:44.022764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:08.417 [2024-05-15 01:27:44.022785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ec980 with addr=10.0.0.2, port=8010 00:25:08.417 [2024-05-15 01:27:44.022798] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:08.417 [2024-05-15 01:27:44.022807] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:08.417 [2024-05-15 01:27:44.022816] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:09.353 [2024-05-15 01:27:45.024724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.353 [2024-05-15 01:27:45.025195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:09.353 [2024-05-15 01:27:45.025208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f31840 with addr=10.0.0.2, port=8010 00:25:09.353 [2024-05-15 01:27:45.025236] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:09.353 [2024-05-15 01:27:45.025244] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:09.353 [2024-05-15 01:27:45.025252] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:10.731 [2024-05-15 01:27:46.026780] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:10.731 request: 00:25:10.731 { 00:25:10.731 "name": "nvme_second", 00:25:10.731 "trtype": "tcp", 00:25:10.731 "traddr": "10.0.0.2", 00:25:10.731 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:10.731 
"adrfam": "ipv4", 00:25:10.731 "trsvcid": "8010", 00:25:10.731 "attach_timeout_ms": 3000, 00:25:10.731 "method": "bdev_nvme_start_discovery", 00:25:10.731 "req_id": 1 00:25:10.731 } 00:25:10.731 Got JSON-RPC error response 00:25:10.731 response: 00:25:10.731 { 00:25:10.731 "code": -110, 00:25:10.731 "message": "Connection timed out" 00:25:10.731 } 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 21740 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:10.731 rmmod nvme_tcp 00:25:10.731 rmmod nvme_fabrics 00:25:10.731 rmmod nvme_keyring 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 21704 ']' 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 21704 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 21704 ']' 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 21704 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 21704 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 21704' 00:25:10.731 killing process with pid 21704 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 21704 00:25:10.731 [2024-05-15 01:27:46.204051] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 21704 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.731 01:27:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:13.267 00:25:13.267 real 0m19.299s 00:25:13.267 user 0m22.609s 00:25:13.267 sys 0m7.115s 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.267 ************************************ 00:25:13.267 END TEST nvmf_host_discovery 00:25:13.267 ************************************ 00:25:13.267 01:27:48 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:13.267 01:27:48 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:13.267 01:27:48 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:13.267 01:27:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:13.267 ************************************ 00:25:13.267 START TEST nvmf_host_multipath_status 00:25:13.267 ************************************ 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:13.267 * Looking for test storage... 
00:25:13.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.267 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:13.268 01:27:48 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:13.268 01:27:48 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:19.838 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:19.839 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:19.839 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:19.839 Found net devices under 0000:af:00.0: cvl_0_0 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:19.839 Found net devices under 0000:af:00.1: cvl_0_1 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:19.839 01:27:55 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:19.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:19.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:25:19.839 00:25:19.839 --- 10.0.0.2 ping statistics --- 00:25:19.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.839 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:19.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:19.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:25:19.839 00:25:19.839 --- 10.0.0.1 ping statistics --- 00:25:19.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.839 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=27192 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 27192 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 27192 ']' 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:19.839 01:27:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:19.839 [2024-05-15 01:27:55.487592] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
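
The nvmf_tcp_init trace above wires the two E810 ports into a point-to-point TCP test bed: the target-side port (cvl_0_0) is moved into a network namespace and given 10.0.0.2/24, the initiator-side port (cvl_0_1) keeps 10.0.0.1/24 in the root namespace, and nvmf_tgt is then launched inside that namespace. Condensed from the commands in the trace (interface and namespace names are what this rig reports; binary paths are abbreviated):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # namespace -> root ns
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
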
00:25:19.839 [2024-05-15 01:27:55.487640] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.839 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.099 [2024-05-15 01:27:55.562370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:20.099 [2024-05-15 01:27:55.635232] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.099 [2024-05-15 01:27:55.635266] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.099 [2024-05-15 01:27:55.635275] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.099 [2024-05-15 01:27:55.635283] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.099 [2024-05-15 01:27:55.635292] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:20.099 [2024-05-15 01:27:55.635385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.099 [2024-05-15 01:27:55.635389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.667 01:27:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:20.667 01:27:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:25:20.667 01:27:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:20.667 01:27:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:20.667 01:27:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:20.667 01:27:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.668 01:27:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=27192 00:25:20.668 01:27:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:20.927 [2024-05-15 01:27:56.480291] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.927 01:27:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:21.186 Malloc0 00:25:21.187 01:27:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:21.187 01:27:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:21.446 01:27:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:21.705 [2024-05-15 01:27:57.165621] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:25:21.705 [2024-05-15 01:27:57.165856] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.705 01:27:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:21.705 [2024-05-15 01:27:57.322210] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:21.705 01:27:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:21.705 01:27:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=27488 00:25:21.705 01:27:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:21.705 01:27:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 27488 /var/tmp/bdevperf.sock 00:25:21.705 01:27:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 27488 ']' 00:25:21.705 01:27:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:21.705 01:27:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:21.705 01:27:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:21.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
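
From here the multipath test runs against bdevperf, which was started with -z so its verify workload only begins once perform_tests is sent over /var/tmp/bdevperf.sock. The trace that follows attaches nqn.2016-06.io.spdk:cnode1 over both listeners so Nvme0 ends up with two paths (4420 and 4421, the second with -x multipath), kicks off the workload, and then flips the ANA state on the target while reading bdev_nvme_get_io_paths to confirm which path is current, connected and accessible. Condensed from the rpc.py calls traced below (paths abbreviated; the backgrounded perform_tests reflects the fact that the status checks run while the workload is in flight):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -x multipath -l -1 -o 10
  ./examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &

  # One port_status probe: pick the path with the given trsvcid and read one flag.
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
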
00:25:21.705 01:27:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:21.705 01:27:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:22.643 01:27:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:22.643 01:27:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:25:22.643 01:27:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:22.904 01:27:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:23.163 Nvme0n1 00:25:23.163 01:27:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:23.731 Nvme0n1 00:25:23.731 01:27:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:23.731 01:27:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:25.638 01:28:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:25.638 01:28:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:25.897 01:28:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:25.897 01:28:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:27.275 01:28:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:27.275 01:28:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:27.275 01:28:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.275 01:28:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:27.275 01:28:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.275 01:28:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:27.275 01:28:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.275 01:28:02 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:27.275 01:28:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:27.275 01:28:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:27.275 01:28:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.275 01:28:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:27.534 01:28:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.534 01:28:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:27.534 01:28:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.534 01:28:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:27.793 01:28:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.793 01:28:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:27.793 01:28:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.793 01:28:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:27.793 01:28:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.793 01:28:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:27.793 01:28:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.793 01:28:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:28.052 01:28:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:28.052 01:28:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:28.052 01:28:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:28.312 01:28:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:28.571 01:28:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:29.508 01:28:05 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:29.508 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:29.508 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.508 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:29.768 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:29.768 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:29.768 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.768 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:29.768 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.768 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:29.768 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.768 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:30.027 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.027 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:30.027 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.027 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:30.287 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.287 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:30.287 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.287 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:30.287 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.287 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:30.287 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.287 01:28:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:30.546 01:28:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.546 01:28:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:30.546 01:28:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:30.804 01:28:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:30.804 01:28:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:32.182 01:28:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:32.182 01:28:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:32.182 01:28:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.182 01:28:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:32.182 01:28:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.182 01:28:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:32.182 01:28:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.182 01:28:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:32.182 01:28:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:32.182 01:28:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:32.182 01:28:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:32.182 01:28:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.442 01:28:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.442 01:28:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:32.442 01:28:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.442 01:28:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:32.701 01:28:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.701 01:28:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:32.702 01:28:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:32.702 01:28:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.961 01:28:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.961 01:28:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:32.961 01:28:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.961 01:28:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:32.961 01:28:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:32.961 01:28:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:32.961 01:28:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:33.221 01:28:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:33.479 01:28:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:34.417 01:28:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:34.417 01:28:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:34.417 01:28:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.417 01:28:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:34.675 01:28:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.675 01:28:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:34.675 01:28:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.675 01:28:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:34.675 01:28:10 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:34.675 01:28:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:34.675 01:28:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:34.675 01:28:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.934 01:28:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.934 01:28:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:34.934 01:28:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.934 01:28:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:35.193 01:28:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.193 01:28:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:35.193 01:28:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.193 01:28:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:35.193 01:28:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:35.193 01:28:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:35.193 01:28:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:35.193 01:28:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:35.452 01:28:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:35.452 01:28:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:35.452 01:28:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:35.711 01:28:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:35.711 01:28:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:37.085 01:28:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:37.085 01:28:12 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:37.085 01:28:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.085 01:28:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:37.085 01:28:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:37.085 01:28:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:37.085 01:28:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.085 01:28:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:37.085 01:28:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:37.085 01:28:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:37.085 01:28:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:37.085 01:28:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.344 01:28:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.344 01:28:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:37.344 01:28:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:37.344 01:28:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.604 01:28:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.604 01:28:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:37.604 01:28:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:37.604 01:28:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.604 01:28:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:37.604 01:28:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:37.604 01:28:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:37.604 01:28:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.922 01:28:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:37.922 01:28:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:37.922 01:28:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:38.180 01:28:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:38.180 01:28:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:39.557 01:28:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:39.557 01:28:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:39.557 01:28:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.557 01:28:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:39.557 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:39.557 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:39.557 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.557 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:39.557 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.557 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:39.558 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.558 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:39.817 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:39.817 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:39.817 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:39.817 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:40.077 01:28:15 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.077 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:40.077 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.077 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:40.077 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:40.077 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:40.077 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.077 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:40.336 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.336 01:28:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:40.594 01:28:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:40.594 01:28:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:40.595 01:28:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:40.853 01:28:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:41.789 01:28:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:41.789 01:28:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:41.789 01:28:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.789 01:28:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:42.047 01:28:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.047 01:28:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:42.047 01:28:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:42.047 01:28:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.306 01:28:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.306 01:28:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:42.306 01:28:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.306 01:28:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:42.564 01:28:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.564 01:28:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:42.564 01:28:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.564 01:28:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:42.564 01:28:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.564 01:28:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:42.564 01:28:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.564 01:28:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:42.822 01:28:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.822 01:28:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:42.822 01:28:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.822 01:28:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:43.081 01:28:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:43.081 01:28:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:43.081 01:28:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:43.081 01:28:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:43.341 01:28:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:44.277 01:28:19 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:44.277 01:28:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:44.277 01:28:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.277 01:28:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:44.536 01:28:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:44.536 01:28:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:44.536 01:28:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.536 01:28:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:44.794 01:28:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.794 01:28:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:44.795 01:28:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.795 01:28:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:44.795 01:28:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:44.795 01:28:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:44.795 01:28:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.795 01:28:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:45.054 01:28:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.054 01:28:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:45.054 01:28:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.054 01:28:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:45.312 01:28:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.312 01:28:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:45.312 01:28:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.312 01:28:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:45.571 01:28:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.571 01:28:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:45.571 01:28:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:45.571 01:28:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:45.830 01:28:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:46.767 01:28:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:46.767 01:28:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:46.767 01:28:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:46.767 01:28:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:47.025 01:28:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.025 01:28:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:47.025 01:28:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.025 01:28:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:47.284 01:28:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.284 01:28:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:47.284 01:28:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:47.284 01:28:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.284 01:28:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.284 01:28:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:47.284 01:28:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.284 01:28:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:47.543 01:28:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.543 01:28:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:47.543 01:28:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.543 01:28:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:47.804 01:28:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:47.804 01:28:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:47.805 01:28:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:47.805 01:28:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:48.063 01:28:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:48.063 01:28:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:48.063 01:28:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:48.063 01:28:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:48.322 01:28:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:49.256 01:28:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:49.256 01:28:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:49.256 01:28:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.256 01:28:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:49.515 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.515 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:49.515 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.515 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:49.774 01:28:25 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:49.774 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:49.774 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.774 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:49.774 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.774 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:49.774 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.774 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:50.034 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.034 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:50.034 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.034 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:50.293 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.293 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:50.293 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.293 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:50.293 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:50.293 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 27488 00:25:50.293 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 27488 ']' 00:25:50.293 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 27488 00:25:50.293 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:25:50.293 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:50.293 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 27488 00:25:50.556 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:25:50.556 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:25:50.556 
01:28:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 27488' 00:25:50.556 killing process with pid 27488 00:25:50.556 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 27488 00:25:50.556 01:28:25 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 27488 00:25:50.556 Connection closed with partial response: 00:25:50.556 00:25:50.556 00:25:50.556 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 27488 00:25:50.556 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:50.556 [2024-05-15 01:27:57.383207] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:25:50.556 [2024-05-15 01:27:57.383260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid27488 ] 00:25:50.556 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.556 [2024-05-15 01:27:57.448112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.556 [2024-05-15 01:27:57.518712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.556 Running I/O for 90 seconds... 00:25:50.556 [2024-05-15 01:28:11.193720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.193760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.193814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.193825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.193842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.193851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.193866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.193876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.193891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.193900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.193915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.193924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:45 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.193938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.193948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.193962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.193972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.193986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.193996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:96944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:96960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:96968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:96976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
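The ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions in this trace show the target failing writes on a path whose ANA state was switched to inaccessible while bdevperf keeps the verify workload running; the check_status/port_status calls earlier in the log confirm the host's view of each path by querying bdev_nvme_get_io_paths over the bdevperf RPC socket and filtering one field per listener port with jq. A minimal sketch of that query pattern, reusing the socket path and jq filter from the trace; the path_field helper name and the example assertion are illustrative, not the test's own port_status function:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bdevperf_sock=/var/tmp/bdevperf.sock

  # Return one field (current / connected / accessible) of the io_path whose
  # listener port (trsvcid) matches $1, mirroring the jq filters in the trace.
  path_field() {
          local port=$1 field=$2
          "$rpc_py" -s "$bdevperf_sock" bdev_nvme_get_io_paths |
                  jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field"
  }

  # e.g. after set_ANA_state inaccessible optimized, 4421 should be the current,
  # accessible path and 4420 should no longer be accessible:
  [[ $(path_field 4421 current) == true && $(path_field 4420 accessible) == false ]]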
00:25:50.556 [2024-05-15 01:28:11.194858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.194984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.194999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.556 [2024-05-15 01:28:11.195409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.556 [2024-05-15 01:28:11.195435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.556 [2024-05-15 01:28:11.195459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.556 [2024-05-15 01:28:11.195484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.556 [2024-05-15 01:28:11.195509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.556 [2024-05-15 01:28:11.195534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.556 [2024-05-15 01:28:11.195559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 
dnr:0 00:25:50.556 [2024-05-15 01:28:11.195598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:50.556 [2024-05-15 01:28:11.195725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.556 [2024-05-15 01:28:11.195735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.195750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.195759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.195774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.195783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.195799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.195809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.195824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.195833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.195849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.195859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.195874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.195884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.195899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.195908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.195924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.195933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.195948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.195958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:50.557 [2024-05-15 01:28:11.196465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.557 [2024-05-15 01:28:11.196520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.557 [2024-05-15 01:28:11.196547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.557 [2024-05-15 01:28:11.196574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.557 [2024-05-15 01:28:11.196601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.557 [2024-05-15 01:28:11.196628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.557 [2024-05-15 01:28:11.196655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.557 [2024-05-15 01:28:11.196682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.557 [2024-05-15 01:28:11.196710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 
lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.196975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.196984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.197003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.197012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.197030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.197039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.197057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.197067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.197084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.197093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.197111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.197122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.197140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.197149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.197166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.197176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.197197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.197206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.197224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.197233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.197251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.197261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:25:50.557 [2024-05-15 01:28:11.197279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.197288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.197306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.197315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.197333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.197342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.197448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.197459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.197480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.197489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.197510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.197519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.197539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.557 [2024-05-15 01:28:11.197548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:50.557 [2024-05-15 01:28:11.197571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:11.197580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:11.197600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:11.197610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:11.197630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:11.197639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:11.197659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:11.197669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:11.197689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.558 [2024-05-15 01:28:11.197698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:11.197718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:11.197728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:11.197748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:11.197757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.835155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:35088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.558 [2024-05-15 01:28:23.835199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.837459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:35120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.558 [2024-05-15 01:28:23.837485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.837504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:35152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.558 [2024-05-15 01:28:23.837515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.837762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:35128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.558 [2024-05-15 01:28:23.837774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.837789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.558 [2024-05-15 01:28:23.837799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.837819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:35184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.558 [2024-05-15 01:28:23.837828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.837843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.558 [2024-05-15 01:28:23.837852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:23.838122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:23.838147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:23.838171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:35304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:23.838202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:35320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:23.838226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:35336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:23.838250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:23.838274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:35368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:23.838297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:35384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:50.558 [2024-05-15 01:28:23.838322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:35400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:23.838518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:35416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:23.838546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:23.838574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:23.838599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:35192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.558 [2024-05-15 01:28:23.838623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.558 [2024-05-15 01:28:23.838646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:35248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.558 [2024-05-15 01:28:23.838671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:23.838694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:35480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:23.838718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 
lba:35496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:23.838742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:23.838765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:35528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:23.838790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:50.558 [2024-05-15 01:28:23.838805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:35544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.558 [2024-05-15 01:28:23.838814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:50.558 Received shutdown signal, test time was about 26.678615 seconds 00:25:50.558 00:25:50.558 Latency(us) 00:25:50.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:50.558 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:50.558 Verification LBA range: start 0x0 length 0x4000 00:25:50.558 Nvme0n1 : 26.68 11007.13 43.00 0.00 0.00 11608.71 501.35 3019898.88 00:25:50.558 =================================================================================================================== 00:25:50.558 Total : 11007.13 43.00 0.00 0.00 11608.71 501.35 3019898.88 00:25:50.558 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:50.817 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:50.817 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:50.817 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:50.817 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:50.817 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:25:50.817 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:50.817 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:25:50.817 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:50.817 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:50.817 rmmod nvme_tcp 00:25:50.817 rmmod nvme_fabrics 00:25:50.817 rmmod nvme_keyring 00:25:50.817 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:50.817 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:25:50.817 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 
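For reference, the teardown traced just above reduces to two steps: delete the test subsystem over the SPDK RPC socket, then unload the kernel NVMe-oF initiator modules. A minimal sketch follows, assuming rpc.py's default socket; the rpc.py path, NQN, and module names are taken from the trace, while the explicit sleep in the retry loop is an illustrative stand-in for nvmf/common.sh's `for i in {1..20}` logic.

    #!/usr/bin/env bash
    # Hedged sketch of the multipath_status teardown traced above (not the authoritative script).
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Remove the test subsystem so no host remains connected to it.
    "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

    # Unload the kernel initiator modules; retry briefly because nvme-tcp can stay
    # busy for a moment after the last disconnect (nvme_fabrics and nvme_keyring
    # are pulled out as dependencies, as the rmmod lines in the log show).
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics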
00:25:50.817 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 27192 ']' 00:25:50.817 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 27192 00:25:50.817 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 27192 ']' 00:25:50.817 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 27192 00:25:50.817 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:25:50.817 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:50.817 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 27192 00:25:51.111 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:51.111 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:51.111 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 27192' 00:25:51.111 killing process with pid 27192 00:25:51.111 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 27192 00:25:51.111 [2024-05-15 01:28:26.534140] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:51.111 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 27192 00:25:51.111 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:51.111 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:51.111 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:51.111 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:51.111 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:51.111 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.111 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:51.111 01:28:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.671 01:28:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:53.671 00:25:53.671 real 0m40.270s 00:25:53.671 user 1m42.314s 00:25:53.671 sys 0m14.328s 00:25:53.671 01:28:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:53.671 01:28:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:53.671 ************************************ 00:25:53.671 END TEST nvmf_host_multipath_status 00:25:53.671 ************************************ 00:25:53.671 01:28:28 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:53.671 01:28:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:53.671 01:28:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:53.671 01:28:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:25:53.671 ************************************ 00:25:53.671 START TEST nvmf_discovery_remove_ifc 00:25:53.671 ************************************ 00:25:53.671 01:28:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:53.671 * Looking for test storage... 00:25:53.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.671 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:53.672 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.672 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:53.672 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:53.672 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:25:53.672 01:28:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:00.248 01:28:35 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:00.248 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:00.248 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 
]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:00.248 Found net devices under 0000:af:00.0: cvl_0_0 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:00.248 Found net devices under 0000:af:00.1: cvl_0_1 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:00.248 
01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:00.248 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:00.508 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:00.508 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:00.508 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:00.508 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:00.508 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:26:00.508 00:26:00.508 --- 10.0.0.2 ping statistics --- 00:26:00.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.508 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:26:00.508 01:28:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:00.508 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:00.508 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:26:00.508 00:26:00.508 --- 10.0.0.1 ping statistics --- 00:26:00.508 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.508 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=36283 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 36283 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 36283 ']' 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:00.508 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:00.508 [2024-05-15 01:28:36.093126] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
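
The trace above is nvmftestinit building the TCP test topology: the two ice-bound E810 ports are detected as cvl_0_0 and cvl_0_1, the target port is moved into a network namespace (cvl_0_0_ns_spdk) with 10.0.0.2/24 while the initiator keeps 10.0.0.1/24 on cvl_0_1, TCP port 4420 is opened in iptables, and connectivity is checked with a ping in each direction before nvmf_tgt (core mask 0x2) is launched inside the namespace in the lines that follow. A minimal sketch of the same setup, assuming the interface names detected on this host:

    # Target-side namespace; one port acts as target, the other as initiator.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
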
00:26:00.508 [2024-05-15 01:28:36.093177] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.508 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.508 [2024-05-15 01:28:36.168184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.768 [2024-05-15 01:28:36.242015] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:00.768 [2024-05-15 01:28:36.242050] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:00.768 [2024-05-15 01:28:36.242059] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.768 [2024-05-15 01:28:36.242068] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.768 [2024-05-15 01:28:36.242090] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:00.768 [2024-05-15 01:28:36.242111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.337 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:01.337 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:26:01.337 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:01.337 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:01.337 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.337 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.337 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:01.337 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:01.337 01:28:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.337 [2024-05-15 01:28:36.955877] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.337 [2024-05-15 01:28:36.963858] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:01.337 [2024-05-15 01:28:36.964061] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:01.337 null0 00:26:01.337 [2024-05-15 01:28:36.996045] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.337 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:01.337 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=36447 00:26:01.337 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:01.337 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 36447 /tmp/host.sock 00:26:01.337 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 36447 ']' 00:26:01.337 01:28:37 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:26:01.337 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:01.337 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:01.337 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:01.337 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:01.337 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:01.596 [2024-05-15 01:28:37.059509] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:26:01.596 [2024-05-15 01:28:37.059552] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid36447 ] 00:26:01.596 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.596 [2024-05-15 01:28:37.128798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.596 [2024-05-15 01:28:37.204174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.533 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:02.533 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:26:02.533 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:02.533 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:02.533 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.533 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:02.533 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.533 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:02.533 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.533 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:02.533 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:02.533 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:02.533 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:02.533 01:28:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.470 [2024-05-15 01:28:39.005009] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:03.470 [2024-05-15 01:28:39.005034] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:03.470 [2024-05-15 
01:28:39.005048] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:03.470 [2024-05-15 01:28:39.134441] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:03.729 [2024-05-15 01:28:39.194740] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:03.729 [2024-05-15 01:28:39.194783] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:03.729 [2024-05-15 01:28:39.194805] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:03.729 [2024-05-15 01:28:39.194819] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:03.729 [2024-05-15 01:28:39.194837] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:03.729 [2024-05-15 01:28:39.204270] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc8f7a0 was disconnected and freed. delete nvme_qpair. 
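
At this point two SPDK applications are running: the target (pid 36283) inside the namespace, and a host-side app (pid 36447) started with -r /tmp/host.sock --wait-for-rpc -L bdev_nvme. The host app is driven over its RPC socket: bdev_nvme_set_options -e 1, framework_start_init, then bdev_nvme_start_discovery against the target's discovery service on 10.0.0.2:8009, which attaches subsystem nqn.2016-06.io.spdk:cnode0 as controller nvme0 and exposes bdev nvme0n1. The rpc_cmd calls in the trace are the harness's wrapper around scripts/rpc.py; an equivalent direct invocation (a sketch, not taken verbatim from the trace) would be:

    # Host-side RPCs as issued above, sent straight to the host app's socket.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    ./scripts/rpc.py -s /tmp/host.sock framework_start_init
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    # wait_for_bdev nvme0n1 boils down to listing bdev names:
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
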
00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:03.729 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:03.988 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:03.988 01:28:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:04.938 01:28:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:04.938 01:28:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:04.938 01:28:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:04.938 01:28:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:04.938 01:28:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:04.938 01:28:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:04.938 01:28:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:04.938 01:28:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:04.938 01:28:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:04.938 01:28:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:05.874 01:28:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:05.874 01:28:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:05.874 01:28:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:05.874 01:28:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:05.874 01:28:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.874 01:28:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:26:05.874 01:28:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:05.874 01:28:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.874 01:28:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:05.874 01:28:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:07.253 01:28:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:07.253 01:28:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:07.253 01:28:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:07.253 01:28:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:07.253 01:28:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:07.253 01:28:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:07.253 01:28:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:07.253 01:28:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:07.253 01:28:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:07.253 01:28:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:08.190 01:28:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:08.190 01:28:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:08.190 01:28:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:08.190 01:28:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:08.190 01:28:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.190 01:28:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:08.190 01:28:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:08.190 01:28:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.190 01:28:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:08.190 01:28:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:09.128 [2024-05-15 01:28:44.635790] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:09.128 [2024-05-15 01:28:44.635833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.128 [2024-05-15 01:28:44.635862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.128 [2024-05-15 01:28:44.635873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.128 [2024-05-15 01:28:44.635883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:09.128 [2024-05-15 01:28:44.635893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.128 [2024-05-15 01:28:44.635903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.128 [2024-05-15 01:28:44.635912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.128 [2024-05-15 01:28:44.635922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.128 [2024-05-15 01:28:44.635932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:09.128 [2024-05-15 01:28:44.635941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:09.128 [2024-05-15 01:28:44.635950] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc568d0 is same with the state(5) to be set 00:26:09.128 [2024-05-15 01:28:44.645811] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc568d0 (9): Bad file descriptor 00:26:09.128 [2024-05-15 01:28:44.655853] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:09.128 01:28:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:09.128 01:28:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:09.128 01:28:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:09.128 01:28:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:09.128 01:28:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:09.128 01:28:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:09.128 01:28:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:10.064 [2024-05-15 01:28:45.703209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:11.042 [2024-05-15 01:28:46.727263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:11.042 [2024-05-15 01:28:46.727310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc568d0 with addr=10.0.0.2, port=4420 00:26:11.042 [2024-05-15 01:28:46.727329] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc568d0 is same with the state(5) to be set 00:26:11.042 [2024-05-15 01:28:46.727712] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc568d0 (9): Bad file descriptor 00:26:11.042 [2024-05-15 01:28:46.727745] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
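
The spdk_sock_recv()/connect() errors and the failed reset above are the intended result of the step at host/discovery_remove_ifc.sh@75-76, which pulls the target's address out from under the established connections and downs its interface; the harness then polls bdev_get_bdevs once a second (wait_for_bdev '') until the host app has deleted nvme0n1. A sketch of that step and the polling, modeled on the helpers visible in the trace:

    # Remove the target address and link while the connections are still up ...
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # ... then wait for the bdev list to drain (wait_for_bdev '').
    until [[ $(./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs) == '' ]]; do
        sleep 1
    done
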
00:26:11.042 [2024-05-15 01:28:46.727772] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:11.042 [2024-05-15 01:28:46.727800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.042 [2024-05-15 01:28:46.727821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.042 [2024-05-15 01:28:46.727836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.042 [2024-05-15 01:28:46.727849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.042 [2024-05-15 01:28:46.727863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.042 [2024-05-15 01:28:46.727875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.042 [2024-05-15 01:28:46.727889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.042 [2024-05-15 01:28:46.727901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.042 [2024-05-15 01:28:46.727915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:11.042 [2024-05-15 01:28:46.727928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.042 [2024-05-15 01:28:46.727941] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
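
Here the reconnect attempts time out (connect() errno 110), the discovery poller removes the nqn.2016-06.io.spdk:cnode0 entry, the outstanding admin commands are aborted, and the discovery controller is marked failed. Because the discovery was started with --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2, nvme0n1 is deleted after roughly two seconds of failed reconnects, which is why the next bdev listing below comes back empty. One way to confirm the controller itself is gone (not part of this trace, shown only as an illustration):

    # Hypothetical check, not issued by the test: list NVMe controllers known to
    # the host app; an empty list is expected once nvme0 has been torn down.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
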
00:26:11.042 [2024-05-15 01:28:46.728359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc55d60 (9): Bad file descriptor 00:26:11.042 [2024-05-15 01:28:46.729379] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:11.042 [2024-05-15 01:28:46.729397] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:11.301 01:28:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:11.301 01:28:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:11.301 01:28:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:12.238 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:12.238 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.238 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:12.238 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:12.238 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.238 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:12.238 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.238 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.238 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:12.238 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.238 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:12.238 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:12.238 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:12.238 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:12.238 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:12.238 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:12.238 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.238 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:12.238 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:12.238 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:12.497 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:12.497 01:28:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:13.435 [2024-05-15 01:28:48.779979] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:13.435 [2024-05-15 01:28:48.779999] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:13.435 [2024-05-15 01:28:48.780013] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:13.435 [2024-05-15 01:28:48.909412] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:13.435 01:28:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:13.435 01:28:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:13.435 01:28:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:13.435 01:28:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:13.435 01:28:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:13.435 01:28:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:13.435 01:28:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:13.435 01:28:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:13.435 01:28:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:13.435 01:28:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:13.694 [2024-05-15 01:28:49.134478] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:13.694 [2024-05-15 01:28:49.134512] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:13.694 [2024-05-15 01:28:49.134530] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:13.694 [2024-05-15 01:28:49.134544] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:13.694 [2024-05-15 01:28:49.134553] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:13.694 [2024-05-15 01:28:49.139561] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc99e50 was disconnected and freed. delete nvme_qpair. 
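
Once the list is empty the harness restores the target interface (host/discovery_remove_ifc.sh@82-83). No further RPCs are needed: the discovery service is still being polled, so the subsystem is re-attached automatically, this time as controller nvme1 with bdev nvme1n1, which is what wait_for_bdev nvme1n1 waits for. A sketch of the restore-and-wait step:

    # Bring the target address/link back; discovery re-attaches on its own.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    until [[ $(./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs) == nvme1n1 ]]; do
        sleep 1
    done
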
00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 36447 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 36447 ']' 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 36447 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 36447 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 36447' 00:26:14.631 killing process with pid 36447 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 36447 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 36447 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:14.631 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:14.631 rmmod nvme_tcp 00:26:14.890 rmmod nvme_fabrics 00:26:14.890 rmmod nvme_keyring 00:26:14.890 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:14.890 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:14.890 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:14.890 01:28:50 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 36283 ']' 00:26:14.890 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 36283 00:26:14.890 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 36283 ']' 00:26:14.890 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 36283 00:26:14.890 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:26:14.890 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:14.890 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 36283 00:26:14.890 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:14.891 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:14.891 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 36283' 00:26:14.891 killing process with pid 36283 00:26:14.891 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 36283 00:26:14.891 [2024-05-15 01:28:50.437358] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:14.891 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 36283 00:26:15.150 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:15.150 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:15.150 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:15.150 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:15.150 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:15.150 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.150 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:15.150 01:28:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.056 01:28:52 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:17.056 00:26:17.056 real 0m23.793s 00:26:17.056 user 0m27.381s 00:26:17.056 sys 0m7.499s 00:26:17.056 01:28:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:17.057 01:28:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:17.057 ************************************ 00:26:17.057 END TEST nvmf_discovery_remove_ifc 00:26:17.057 ************************************ 00:26:17.316 01:28:52 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:17.316 01:28:52 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:17.316 01:28:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:17.316 01:28:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:17.316 
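
With nvme1n1 back the test has passed; the traps are cleared and teardown runs: the host app (pid 36447) and the namespaced target (pid 36283) are killed, the kernel NVMe/TCP modules are unloaded, the namespace is removed and the initiator address flushed, and the harness prints the timing summary before launching the next suite, nvmf_identify_kernel_target, below. A rough sketch of that teardown (pids are specific to this run, and _remove_spdk_ns runs with tracing suppressed, so its netns removal is an assumption rather than something shown above):

    kill 36447                          # host app on /tmp/host.sock (killprocess)
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 36283                          # nvmf_tgt inside the namespace
    ip netns delete cvl_0_0_ns_spdk     # assumed: what _remove_spdk_ns amounts to
    ip -4 addr flush cvl_0_1
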
************************************ 00:26:17.316 START TEST nvmf_identify_kernel_target 00:26:17.316 ************************************ 00:26:17.316 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:17.316 * Looking for test storage... 00:26:17.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:17.316 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:17.316 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:17.316 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:17.316 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:17.316 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:17.316 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:17.317 01:28:52 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:17.317 01:28:52 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:23.889 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:23.889 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:23.889 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:23.889 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:23.889 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:23.889 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:23.889 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:23.889 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:23.889 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:23.889 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:23.889 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:23.889 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:23.889 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:23.889 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:23.889 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:23.889 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:23.889 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:23.889 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:23.889 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:23.889 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:23.890 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:23.890 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:23.890 Found net devices under 0000:af:00.0: cvl_0_0 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:23.890 Found net devices under 0000:af:00.1: cvl_0_1 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:23.890 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:24.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:24.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:26:24.150 00:26:24.150 --- 10.0.0.2 ping statistics --- 00:26:24.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.150 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:24.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:24.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:26:24.150 00:26:24.150 --- 10.0.0.1 ping statistics --- 00:26:24.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.150 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:24.150 01:28:59 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:24.150 01:28:59 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:27.443 Waiting for block devices as requested 00:26:27.443 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:27.443 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:27.443 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:27.702 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:27.702 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:27.702 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:27.702 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:27.960 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:27.960 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:27.960 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:28.219 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:28.219 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:28.219 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:28.478 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:28.478 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:28.478 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:28.737 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:28.737 No valid GPT data, bailing 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:28.737 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:26:28.998 00:26:28.998 Discovery Log Number of Records 2, Generation counter 2 00:26:28.998 =====Discovery Log Entry 0====== 00:26:28.998 trtype: tcp 00:26:28.998 adrfam: ipv4 00:26:28.998 subtype: current discovery subsystem 00:26:28.998 treq: not specified, sq flow control disable supported 00:26:28.998 portid: 1 00:26:28.998 trsvcid: 4420 00:26:28.998 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:28.998 traddr: 10.0.0.1 00:26:28.998 eflags: none 00:26:28.998 sectype: none 00:26:28.998 =====Discovery Log Entry 1====== 00:26:28.998 trtype: tcp 00:26:28.998 adrfam: ipv4 00:26:28.998 subtype: nvme subsystem 00:26:28.998 treq: not specified, sq flow control disable supported 00:26:28.998 portid: 1 00:26:28.998 trsvcid: 4420 00:26:28.998 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:28.998 traddr: 10.0.0.1 00:26:28.998 eflags: none 00:26:28.998 sectype: none 00:26:28.998 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:28.998 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:28.998 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.998 ===================================================== 00:26:28.998 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:28.998 ===================================================== 00:26:28.998 Controller Capabilities/Features 00:26:28.998 ================================ 00:26:28.998 Vendor ID: 0000 00:26:28.998 Subsystem Vendor ID: 0000 00:26:28.998 Serial Number: 477d4ca887edf5a30f50 00:26:28.998 Model Number: Linux 00:26:28.998 Firmware Version: 6.7.0-68 00:26:28.998 Recommended Arb Burst: 0 00:26:28.998 IEEE OUI Identifier: 00 00 00 00:26:28.998 Multi-path I/O 00:26:28.998 May have multiple subsystem ports: No 00:26:28.998 May have multiple 
controllers: No 00:26:28.998 Associated with SR-IOV VF: No 00:26:28.998 Max Data Transfer Size: Unlimited 00:26:28.998 Max Number of Namespaces: 0 00:26:28.998 Max Number of I/O Queues: 1024 00:26:28.998 NVMe Specification Version (VS): 1.3 00:26:28.998 NVMe Specification Version (Identify): 1.3 00:26:28.998 Maximum Queue Entries: 1024 00:26:28.998 Contiguous Queues Required: No 00:26:28.998 Arbitration Mechanisms Supported 00:26:28.998 Weighted Round Robin: Not Supported 00:26:28.998 Vendor Specific: Not Supported 00:26:28.998 Reset Timeout: 7500 ms 00:26:28.998 Doorbell Stride: 4 bytes 00:26:28.998 NVM Subsystem Reset: Not Supported 00:26:28.998 Command Sets Supported 00:26:28.998 NVM Command Set: Supported 00:26:28.998 Boot Partition: Not Supported 00:26:28.998 Memory Page Size Minimum: 4096 bytes 00:26:28.998 Memory Page Size Maximum: 4096 bytes 00:26:28.998 Persistent Memory Region: Not Supported 00:26:28.998 Optional Asynchronous Events Supported 00:26:28.998 Namespace Attribute Notices: Not Supported 00:26:28.998 Firmware Activation Notices: Not Supported 00:26:28.998 ANA Change Notices: Not Supported 00:26:28.998 PLE Aggregate Log Change Notices: Not Supported 00:26:28.998 LBA Status Info Alert Notices: Not Supported 00:26:28.998 EGE Aggregate Log Change Notices: Not Supported 00:26:28.998 Normal NVM Subsystem Shutdown event: Not Supported 00:26:28.998 Zone Descriptor Change Notices: Not Supported 00:26:28.998 Discovery Log Change Notices: Supported 00:26:28.998 Controller Attributes 00:26:28.998 128-bit Host Identifier: Not Supported 00:26:28.998 Non-Operational Permissive Mode: Not Supported 00:26:28.998 NVM Sets: Not Supported 00:26:28.998 Read Recovery Levels: Not Supported 00:26:28.998 Endurance Groups: Not Supported 00:26:28.998 Predictable Latency Mode: Not Supported 00:26:28.998 Traffic Based Keep ALive: Not Supported 00:26:28.998 Namespace Granularity: Not Supported 00:26:28.998 SQ Associations: Not Supported 00:26:28.998 UUID List: Not Supported 00:26:28.998 Multi-Domain Subsystem: Not Supported 00:26:28.998 Fixed Capacity Management: Not Supported 00:26:28.998 Variable Capacity Management: Not Supported 00:26:28.998 Delete Endurance Group: Not Supported 00:26:28.998 Delete NVM Set: Not Supported 00:26:28.998 Extended LBA Formats Supported: Not Supported 00:26:28.998 Flexible Data Placement Supported: Not Supported 00:26:28.998 00:26:28.998 Controller Memory Buffer Support 00:26:28.998 ================================ 00:26:28.998 Supported: No 00:26:28.998 00:26:28.998 Persistent Memory Region Support 00:26:28.998 ================================ 00:26:28.998 Supported: No 00:26:28.998 00:26:28.998 Admin Command Set Attributes 00:26:28.998 ============================ 00:26:28.998 Security Send/Receive: Not Supported 00:26:28.998 Format NVM: Not Supported 00:26:28.998 Firmware Activate/Download: Not Supported 00:26:28.998 Namespace Management: Not Supported 00:26:28.998 Device Self-Test: Not Supported 00:26:28.998 Directives: Not Supported 00:26:28.998 NVMe-MI: Not Supported 00:26:28.998 Virtualization Management: Not Supported 00:26:28.998 Doorbell Buffer Config: Not Supported 00:26:28.998 Get LBA Status Capability: Not Supported 00:26:28.998 Command & Feature Lockdown Capability: Not Supported 00:26:28.998 Abort Command Limit: 1 00:26:28.998 Async Event Request Limit: 1 00:26:28.998 Number of Firmware Slots: N/A 00:26:28.998 Firmware Slot 1 Read-Only: N/A 00:26:28.998 Firmware Activation Without Reset: N/A 00:26:28.998 Multiple Update Detection Support: N/A 
00:26:28.998 Firmware Update Granularity: No Information Provided 00:26:28.998 Per-Namespace SMART Log: No 00:26:28.998 Asymmetric Namespace Access Log Page: Not Supported 00:26:28.998 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:28.998 Command Effects Log Page: Not Supported 00:26:28.998 Get Log Page Extended Data: Supported 00:26:28.998 Telemetry Log Pages: Not Supported 00:26:28.998 Persistent Event Log Pages: Not Supported 00:26:28.998 Supported Log Pages Log Page: May Support 00:26:28.998 Commands Supported & Effects Log Page: Not Supported 00:26:28.998 Feature Identifiers & Effects Log Page:May Support 00:26:28.998 NVMe-MI Commands & Effects Log Page: May Support 00:26:28.998 Data Area 4 for Telemetry Log: Not Supported 00:26:28.998 Error Log Page Entries Supported: 1 00:26:28.998 Keep Alive: Not Supported 00:26:28.998 00:26:28.998 NVM Command Set Attributes 00:26:28.998 ========================== 00:26:28.998 Submission Queue Entry Size 00:26:28.998 Max: 1 00:26:28.998 Min: 1 00:26:28.998 Completion Queue Entry Size 00:26:28.998 Max: 1 00:26:28.998 Min: 1 00:26:28.998 Number of Namespaces: 0 00:26:28.998 Compare Command: Not Supported 00:26:28.998 Write Uncorrectable Command: Not Supported 00:26:28.998 Dataset Management Command: Not Supported 00:26:28.998 Write Zeroes Command: Not Supported 00:26:28.998 Set Features Save Field: Not Supported 00:26:28.998 Reservations: Not Supported 00:26:28.998 Timestamp: Not Supported 00:26:28.998 Copy: Not Supported 00:26:28.998 Volatile Write Cache: Not Present 00:26:28.998 Atomic Write Unit (Normal): 1 00:26:28.998 Atomic Write Unit (PFail): 1 00:26:28.998 Atomic Compare & Write Unit: 1 00:26:28.998 Fused Compare & Write: Not Supported 00:26:28.999 Scatter-Gather List 00:26:28.999 SGL Command Set: Supported 00:26:28.999 SGL Keyed: Not Supported 00:26:28.999 SGL Bit Bucket Descriptor: Not Supported 00:26:28.999 SGL Metadata Pointer: Not Supported 00:26:28.999 Oversized SGL: Not Supported 00:26:28.999 SGL Metadata Address: Not Supported 00:26:28.999 SGL Offset: Supported 00:26:28.999 Transport SGL Data Block: Not Supported 00:26:28.999 Replay Protected Memory Block: Not Supported 00:26:28.999 00:26:28.999 Firmware Slot Information 00:26:28.999 ========================= 00:26:28.999 Active slot: 0 00:26:28.999 00:26:28.999 00:26:28.999 Error Log 00:26:28.999 ========= 00:26:28.999 00:26:28.999 Active Namespaces 00:26:28.999 ================= 00:26:28.999 Discovery Log Page 00:26:28.999 ================== 00:26:28.999 Generation Counter: 2 00:26:28.999 Number of Records: 2 00:26:28.999 Record Format: 0 00:26:28.999 00:26:28.999 Discovery Log Entry 0 00:26:28.999 ---------------------- 00:26:28.999 Transport Type: 3 (TCP) 00:26:28.999 Address Family: 1 (IPv4) 00:26:28.999 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:28.999 Entry Flags: 00:26:28.999 Duplicate Returned Information: 0 00:26:28.999 Explicit Persistent Connection Support for Discovery: 0 00:26:28.999 Transport Requirements: 00:26:28.999 Secure Channel: Not Specified 00:26:28.999 Port ID: 1 (0x0001) 00:26:28.999 Controller ID: 65535 (0xffff) 00:26:28.999 Admin Max SQ Size: 32 00:26:28.999 Transport Service Identifier: 4420 00:26:28.999 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:28.999 Transport Address: 10.0.0.1 00:26:28.999 Discovery Log Entry 1 00:26:28.999 ---------------------- 00:26:28.999 Transport Type: 3 (TCP) 00:26:28.999 Address Family: 1 (IPv4) 00:26:28.999 Subsystem Type: 2 (NVM Subsystem) 00:26:28.999 Entry Flags: 
00:26:28.999 Duplicate Returned Information: 0 00:26:28.999 Explicit Persistent Connection Support for Discovery: 0 00:26:28.999 Transport Requirements: 00:26:28.999 Secure Channel: Not Specified 00:26:28.999 Port ID: 1 (0x0001) 00:26:28.999 Controller ID: 65535 (0xffff) 00:26:28.999 Admin Max SQ Size: 32 00:26:28.999 Transport Service Identifier: 4420 00:26:28.999 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:28.999 Transport Address: 10.0.0.1 00:26:28.999 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:28.999 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.999 get_feature(0x01) failed 00:26:28.999 get_feature(0x02) failed 00:26:28.999 get_feature(0x04) failed 00:26:28.999 ===================================================== 00:26:28.999 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:28.999 ===================================================== 00:26:28.999 Controller Capabilities/Features 00:26:28.999 ================================ 00:26:28.999 Vendor ID: 0000 00:26:28.999 Subsystem Vendor ID: 0000 00:26:28.999 Serial Number: 0138c326cdc2a5eefce1 00:26:28.999 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:28.999 Firmware Version: 6.7.0-68 00:26:28.999 Recommended Arb Burst: 6 00:26:28.999 IEEE OUI Identifier: 00 00 00 00:26:28.999 Multi-path I/O 00:26:28.999 May have multiple subsystem ports: Yes 00:26:28.999 May have multiple controllers: Yes 00:26:28.999 Associated with SR-IOV VF: No 00:26:28.999 Max Data Transfer Size: Unlimited 00:26:28.999 Max Number of Namespaces: 1024 00:26:28.999 Max Number of I/O Queues: 128 00:26:28.999 NVMe Specification Version (VS): 1.3 00:26:28.999 NVMe Specification Version (Identify): 1.3 00:26:28.999 Maximum Queue Entries: 1024 00:26:28.999 Contiguous Queues Required: No 00:26:28.999 Arbitration Mechanisms Supported 00:26:28.999 Weighted Round Robin: Not Supported 00:26:28.999 Vendor Specific: Not Supported 00:26:28.999 Reset Timeout: 7500 ms 00:26:28.999 Doorbell Stride: 4 bytes 00:26:28.999 NVM Subsystem Reset: Not Supported 00:26:28.999 Command Sets Supported 00:26:28.999 NVM Command Set: Supported 00:26:28.999 Boot Partition: Not Supported 00:26:28.999 Memory Page Size Minimum: 4096 bytes 00:26:28.999 Memory Page Size Maximum: 4096 bytes 00:26:28.999 Persistent Memory Region: Not Supported 00:26:28.999 Optional Asynchronous Events Supported 00:26:28.999 Namespace Attribute Notices: Supported 00:26:28.999 Firmware Activation Notices: Not Supported 00:26:28.999 ANA Change Notices: Supported 00:26:28.999 PLE Aggregate Log Change Notices: Not Supported 00:26:28.999 LBA Status Info Alert Notices: Not Supported 00:26:28.999 EGE Aggregate Log Change Notices: Not Supported 00:26:28.999 Normal NVM Subsystem Shutdown event: Not Supported 00:26:28.999 Zone Descriptor Change Notices: Not Supported 00:26:28.999 Discovery Log Change Notices: Not Supported 00:26:28.999 Controller Attributes 00:26:28.999 128-bit Host Identifier: Supported 00:26:28.999 Non-Operational Permissive Mode: Not Supported 00:26:28.999 NVM Sets: Not Supported 00:26:28.999 Read Recovery Levels: Not Supported 00:26:28.999 Endurance Groups: Not Supported 00:26:28.999 Predictable Latency Mode: Not Supported 00:26:28.999 Traffic Based Keep ALive: Supported 00:26:28.999 Namespace Granularity: Not Supported 
00:26:28.999 SQ Associations: Not Supported 00:26:28.999 UUID List: Not Supported 00:26:28.999 Multi-Domain Subsystem: Not Supported 00:26:28.999 Fixed Capacity Management: Not Supported 00:26:28.999 Variable Capacity Management: Not Supported 00:26:28.999 Delete Endurance Group: Not Supported 00:26:28.999 Delete NVM Set: Not Supported 00:26:28.999 Extended LBA Formats Supported: Not Supported 00:26:28.999 Flexible Data Placement Supported: Not Supported 00:26:28.999 00:26:28.999 Controller Memory Buffer Support 00:26:28.999 ================================ 00:26:28.999 Supported: No 00:26:28.999 00:26:28.999 Persistent Memory Region Support 00:26:28.999 ================================ 00:26:28.999 Supported: No 00:26:28.999 00:26:28.999 Admin Command Set Attributes 00:26:28.999 ============================ 00:26:28.999 Security Send/Receive: Not Supported 00:26:28.999 Format NVM: Not Supported 00:26:28.999 Firmware Activate/Download: Not Supported 00:26:28.999 Namespace Management: Not Supported 00:26:28.999 Device Self-Test: Not Supported 00:26:28.999 Directives: Not Supported 00:26:28.999 NVMe-MI: Not Supported 00:26:28.999 Virtualization Management: Not Supported 00:26:28.999 Doorbell Buffer Config: Not Supported 00:26:28.999 Get LBA Status Capability: Not Supported 00:26:28.999 Command & Feature Lockdown Capability: Not Supported 00:26:28.999 Abort Command Limit: 4 00:26:28.999 Async Event Request Limit: 4 00:26:28.999 Number of Firmware Slots: N/A 00:26:28.999 Firmware Slot 1 Read-Only: N/A 00:26:28.999 Firmware Activation Without Reset: N/A 00:26:28.999 Multiple Update Detection Support: N/A 00:26:28.999 Firmware Update Granularity: No Information Provided 00:26:28.999 Per-Namespace SMART Log: Yes 00:26:28.999 Asymmetric Namespace Access Log Page: Supported 00:26:28.999 ANA Transition Time : 10 sec 00:26:28.999 00:26:28.999 Asymmetric Namespace Access Capabilities 00:26:28.999 ANA Optimized State : Supported 00:26:28.999 ANA Non-Optimized State : Supported 00:26:28.999 ANA Inaccessible State : Supported 00:26:28.999 ANA Persistent Loss State : Supported 00:26:28.999 ANA Change State : Supported 00:26:28.999 ANAGRPID is not changed : No 00:26:28.999 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:28.999 00:26:28.999 ANA Group Identifier Maximum : 128 00:26:28.999 Number of ANA Group Identifiers : 128 00:26:28.999 Max Number of Allowed Namespaces : 1024 00:26:28.999 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:28.999 Command Effects Log Page: Supported 00:26:28.999 Get Log Page Extended Data: Supported 00:26:28.999 Telemetry Log Pages: Not Supported 00:26:28.999 Persistent Event Log Pages: Not Supported 00:26:28.999 Supported Log Pages Log Page: May Support 00:26:28.999 Commands Supported & Effects Log Page: Not Supported 00:26:28.999 Feature Identifiers & Effects Log Page:May Support 00:26:28.999 NVMe-MI Commands & Effects Log Page: May Support 00:26:28.999 Data Area 4 for Telemetry Log: Not Supported 00:26:28.999 Error Log Page Entries Supported: 128 00:26:28.999 Keep Alive: Supported 00:26:29.000 Keep Alive Granularity: 1000 ms 00:26:29.000 00:26:29.000 NVM Command Set Attributes 00:26:29.000 ========================== 00:26:29.000 Submission Queue Entry Size 00:26:29.000 Max: 64 00:26:29.000 Min: 64 00:26:29.000 Completion Queue Entry Size 00:26:29.000 Max: 16 00:26:29.000 Min: 16 00:26:29.000 Number of Namespaces: 1024 00:26:29.000 Compare Command: Not Supported 00:26:29.000 Write Uncorrectable Command: Not Supported 00:26:29.000 Dataset Management Command: Supported 
00:26:29.000 Write Zeroes Command: Supported 00:26:29.000 Set Features Save Field: Not Supported 00:26:29.000 Reservations: Not Supported 00:26:29.000 Timestamp: Not Supported 00:26:29.000 Copy: Not Supported 00:26:29.000 Volatile Write Cache: Present 00:26:29.000 Atomic Write Unit (Normal): 1 00:26:29.000 Atomic Write Unit (PFail): 1 00:26:29.000 Atomic Compare & Write Unit: 1 00:26:29.000 Fused Compare & Write: Not Supported 00:26:29.000 Scatter-Gather List 00:26:29.000 SGL Command Set: Supported 00:26:29.000 SGL Keyed: Not Supported 00:26:29.000 SGL Bit Bucket Descriptor: Not Supported 00:26:29.000 SGL Metadata Pointer: Not Supported 00:26:29.000 Oversized SGL: Not Supported 00:26:29.000 SGL Metadata Address: Not Supported 00:26:29.000 SGL Offset: Supported 00:26:29.000 Transport SGL Data Block: Not Supported 00:26:29.000 Replay Protected Memory Block: Not Supported 00:26:29.000 00:26:29.000 Firmware Slot Information 00:26:29.000 ========================= 00:26:29.000 Active slot: 0 00:26:29.000 00:26:29.000 Asymmetric Namespace Access 00:26:29.000 =========================== 00:26:29.000 Change Count : 0 00:26:29.000 Number of ANA Group Descriptors : 1 00:26:29.000 ANA Group Descriptor : 0 00:26:29.000 ANA Group ID : 1 00:26:29.000 Number of NSID Values : 1 00:26:29.000 Change Count : 0 00:26:29.000 ANA State : 1 00:26:29.000 Namespace Identifier : 1 00:26:29.000 00:26:29.000 Commands Supported and Effects 00:26:29.000 ============================== 00:26:29.000 Admin Commands 00:26:29.000 -------------- 00:26:29.000 Get Log Page (02h): Supported 00:26:29.000 Identify (06h): Supported 00:26:29.000 Abort (08h): Supported 00:26:29.000 Set Features (09h): Supported 00:26:29.000 Get Features (0Ah): Supported 00:26:29.000 Asynchronous Event Request (0Ch): Supported 00:26:29.000 Keep Alive (18h): Supported 00:26:29.000 I/O Commands 00:26:29.000 ------------ 00:26:29.000 Flush (00h): Supported 00:26:29.000 Write (01h): Supported LBA-Change 00:26:29.000 Read (02h): Supported 00:26:29.000 Write Zeroes (08h): Supported LBA-Change 00:26:29.000 Dataset Management (09h): Supported 00:26:29.000 00:26:29.000 Error Log 00:26:29.000 ========= 00:26:29.000 Entry: 0 00:26:29.000 Error Count: 0x3 00:26:29.000 Submission Queue Id: 0x0 00:26:29.000 Command Id: 0x5 00:26:29.000 Phase Bit: 0 00:26:29.000 Status Code: 0x2 00:26:29.000 Status Code Type: 0x0 00:26:29.000 Do Not Retry: 1 00:26:29.000 Error Location: 0x28 00:26:29.000 LBA: 0x0 00:26:29.000 Namespace: 0x0 00:26:29.000 Vendor Log Page: 0x0 00:26:29.000 ----------- 00:26:29.000 Entry: 1 00:26:29.000 Error Count: 0x2 00:26:29.000 Submission Queue Id: 0x0 00:26:29.000 Command Id: 0x5 00:26:29.000 Phase Bit: 0 00:26:29.000 Status Code: 0x2 00:26:29.000 Status Code Type: 0x0 00:26:29.000 Do Not Retry: 1 00:26:29.000 Error Location: 0x28 00:26:29.000 LBA: 0x0 00:26:29.000 Namespace: 0x0 00:26:29.000 Vendor Log Page: 0x0 00:26:29.000 ----------- 00:26:29.000 Entry: 2 00:26:29.000 Error Count: 0x1 00:26:29.000 Submission Queue Id: 0x0 00:26:29.000 Command Id: 0x4 00:26:29.000 Phase Bit: 0 00:26:29.000 Status Code: 0x2 00:26:29.000 Status Code Type: 0x0 00:26:29.000 Do Not Retry: 1 00:26:29.000 Error Location: 0x28 00:26:29.000 LBA: 0x0 00:26:29.000 Namespace: 0x0 00:26:29.000 Vendor Log Page: 0x0 00:26:29.000 00:26:29.000 Number of Queues 00:26:29.000 ================ 00:26:29.000 Number of I/O Submission Queues: 128 00:26:29.000 Number of I/O Completion Queues: 128 00:26:29.000 00:26:29.000 ZNS Specific Controller Data 00:26:29.000 
============================ 00:26:29.000 Zone Append Size Limit: 0 00:26:29.000 00:26:29.000 00:26:29.000 Active Namespaces 00:26:29.000 ================= 00:26:29.000 get_feature(0x05) failed 00:26:29.000 Namespace ID:1 00:26:29.000 Command Set Identifier: NVM (00h) 00:26:29.000 Deallocate: Supported 00:26:29.000 Deallocated/Unwritten Error: Not Supported 00:26:29.000 Deallocated Read Value: Unknown 00:26:29.000 Deallocate in Write Zeroes: Not Supported 00:26:29.000 Deallocated Guard Field: 0xFFFF 00:26:29.000 Flush: Supported 00:26:29.000 Reservation: Not Supported 00:26:29.000 Namespace Sharing Capabilities: Multiple Controllers 00:26:29.000 Size (in LBAs): 3125627568 (1490GiB) 00:26:29.000 Capacity (in LBAs): 3125627568 (1490GiB) 00:26:29.000 Utilization (in LBAs): 3125627568 (1490GiB) 00:26:29.000 UUID: 6d502671-32ba-4b9b-8769-dff1b5639c27 00:26:29.000 Thin Provisioning: Not Supported 00:26:29.000 Per-NS Atomic Units: Yes 00:26:29.000 Atomic Boundary Size (Normal): 0 00:26:29.000 Atomic Boundary Size (PFail): 0 00:26:29.000 Atomic Boundary Offset: 0 00:26:29.000 NGUID/EUI64 Never Reused: No 00:26:29.000 ANA group ID: 1 00:26:29.000 Namespace Write Protected: No 00:26:29.000 Number of LBA Formats: 1 00:26:29.000 Current LBA Format: LBA Format #00 00:26:29.000 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:29.000 00:26:29.000 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:29.000 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:29.000 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:29.000 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:29.000 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:29.000 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:29.000 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:29.000 rmmod nvme_tcp 00:26:29.000 rmmod nvme_fabrics 00:26:29.000 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:29.000 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:29.000 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:29.000 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:29.000 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:29.000 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:29.000 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:29.000 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:29.000 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:29.000 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.000 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:29.000 01:29:04 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.600 01:29:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
00:26:31.600 01:29:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:31.600 01:29:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:31.600 01:29:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:31.600 01:29:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:31.600 01:29:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:31.600 01:29:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:31.600 01:29:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:31.600 01:29:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:31.600 01:29:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:31.600 01:29:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:34.895 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:34.895 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:34.895 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:34.895 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:34.895 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:34.895 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:34.895 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:34.895 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:34.895 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:34.895 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:34.896 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:34.896 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:34.896 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:34.896 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:34.896 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:34.896 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:36.274 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:26:36.275 00:26:36.275 real 0m19.126s 00:26:36.275 user 0m4.377s 00:26:36.275 sys 0m10.411s 00:26:36.275 01:29:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:36.275 01:29:11 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:36.275 ************************************ 00:26:36.275 END TEST nvmf_identify_kernel_target 00:26:36.275 ************************************ 00:26:36.534 01:29:11 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:36.534 01:29:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:36.534 01:29:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:36.534 01:29:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:36.534 ************************************ 00:26:36.534 START TEST nvmf_auth_host 00:26:36.534 ************************************ 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh 
--transport=tcp 00:26:36.534 * Looking for test storage... 00:26:36.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:26:36.534 01:29:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:43.103 
01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:43.103 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:43.103 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:43.103 Found net devices under 0000:af:00.0: 
cvl_0_0 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:43.103 Found net devices under 0000:af:00.1: cvl_0_1 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:26:43.103 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:43.104 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:43.104 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:43.104 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:43.104 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:43.104 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:43.104 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:43.104 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:43.104 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:43.104 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:43.104 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:43.104 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:43.104 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:43.104 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:43.104 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:43.104 01:29:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:43.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:43.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:26:43.104 00:26:43.104 --- 10.0.0.2 ping statistics --- 00:26:43.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.104 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:43.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:43.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:26:43.104 00:26:43.104 --- 10.0.0.1 ping statistics --- 00:26:43.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.104 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=48991 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 48991 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 48991 ']' 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
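
The block above is the nvmf_tcp_init step: the two E810 ports discovered earlier are split so the run can exercise real NVMe/TCP traffic on a single node. cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and given 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened, and reachability is ping-checked in both directions. A condensed sketch of that sequence, using the interface names from this run (they will differ on other nodes) and assuming it is executed as root:

    TARGET_IF=cvl_0_0        # moved into the namespace, gets 10.0.0.2
    INITIATOR_IF=cvl_0_1     # stays in the root namespace, gets 10.0.0.1
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    # allow NVMe/TCP connections on the default port
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    # sanity-check reachability in both directions
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1
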
00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.104 01:29:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6c2cc9a04431c85f1e4a1d06472b3485 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Ikd 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6c2cc9a04431c85f1e4a1d06472b3485 0 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6c2cc9a04431c85f1e4a1d06472b3485 0 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6c2cc9a04431c85f1e4a1d06472b3485 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Ikd 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Ikd 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Ikd 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fd76918aec72e5d851e7087b9fce28fbbf0dd0e6c24b6814e3225d9e917fc309 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.qX5 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fd76918aec72e5d851e7087b9fce28fbbf0dd0e6c24b6814e3225d9e917fc309 3 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fd76918aec72e5d851e7087b9fce28fbbf0dd0e6c24b6814e3225d9e917fc309 3 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fd76918aec72e5d851e7087b9fce28fbbf0dd0e6c24b6814e3225d9e917fc309 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.qX5 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.qX5 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.qX5 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:43.672 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:43.673 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:43.673 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=34702be1a96121471dcd94ae2a0b14d6928ba67efc2cc56f 00:26:43.673 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:43.673 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.FoI 00:26:43.673 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 34702be1a96121471dcd94ae2a0b14d6928ba67efc2cc56f 0 00:26:43.673 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 34702be1a96121471dcd94ae2a0b14d6928ba67efc2cc56f 0 00:26:43.673 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:43.673 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:43.673 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=34702be1a96121471dcd94ae2a0b14d6928ba67efc2cc56f 00:26:43.673 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:43.673 
01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:43.931 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.FoI 00:26:43.931 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.FoI 00:26:43.931 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.FoI 00:26:43.931 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:43.931 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:43.931 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:43.931 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:43.931 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:43.931 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:43.931 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:43.931 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1dffb220017ff2ffd13661e6fcbe6f5443d525e53a8bc820 00:26:43.931 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:43.931 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.9kW 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1dffb220017ff2ffd13661e6fcbe6f5443d525e53a8bc820 2 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1dffb220017ff2ffd13661e6fcbe6f5443d525e53a8bc820 2 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1dffb220017ff2ffd13661e6fcbe6f5443d525e53a8bc820 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.9kW 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.9kW 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.9kW 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ce7f92cdd415da426fbfd3d0db6c6c9d 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.hcO 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@729 -- # format_dhchap_key ce7f92cdd415da426fbfd3d0db6c6c9d 1 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ce7f92cdd415da426fbfd3d0db6c6c9d 1 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ce7f92cdd415da426fbfd3d0db6c6c9d 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.hcO 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.hcO 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.hcO 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e274a78f5debcaec0f54e85868487e44 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.7jE 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e274a78f5debcaec0f54e85868487e44 1 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e274a78f5debcaec0f54e85868487e44 1 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e274a78f5debcaec0f54e85868487e44 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.7jE 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.7jE 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.7jE 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:43.932 01:29:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8cc1ca2e0740984bbc9bf0e56f7e4c9b85f11660c289e697 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.HYZ 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8cc1ca2e0740984bbc9bf0e56f7e4c9b85f11660c289e697 2 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8cc1ca2e0740984bbc9bf0e56f7e4c9b85f11660c289e697 2 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8cc1ca2e0740984bbc9bf0e56f7e4c9b85f11660c289e697 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:43.932 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.HYZ 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.HYZ 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.HYZ 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6810a2490e43b4303a270494b091f66b 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.e1Y 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6810a2490e43b4303a270494b091f66b 0 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6810a2490e43b4303a270494b091f66b 0 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6810a2490e43b4303a270494b091f66b 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.e1Y 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.e1Y 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- 
# ckeys[3]=/tmp/spdk.key-null.e1Y 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=18d1e84de1927d0c6899b2b9cbc0fdc8876782357833b0a5b889b30180cc023b 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.KrP 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 18d1e84de1927d0c6899b2b9cbc0fdc8876782357833b0a5b889b30180cc023b 3 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 18d1e84de1927d0c6899b2b9cbc0fdc8876782357833b0a5b889b30180cc023b 3 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=18d1e84de1927d0c6899b2b9cbc0fdc8876782357833b0a5b889b30180cc023b 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.KrP 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.KrP 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.KrP 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 48991 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@827 -- # '[' -z 48991 ']' 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
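
The gen_dhchap_key calls above produce the five DH-HMAC-CHAP secrets (keys[0]..keys[4]) and their controller counterparts (ckeys[0]..ckeys[3]) used for the rest of the test: a random hex string is read from /dev/urandom, wrapped into the DHHC-1:<id>:<base64>: secret representation, and stored in a mode-0600 temp file. In this run the two-digit id tracks the hash argument (00 for null, 01 for sha256, 02 for sha384, 03 for sha512). The sketch below is a stand-alone approximation of that helper, not the library source; in particular, the base64 payload being "key bytes plus a little-endian CRC-32 trailer" is an assumption based on the usual DH-HMAC-CHAP secret convention rather than something visible in the log, and python3 stands in for the interpreter the script invokes as "python -":

    gen_dhchap_key() {                     # usage: gen_dhchap_key sha256 32
        local digest=$1 len=$2 key file
        declare -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # $len hex characters
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # assumed composition: base64(key || crc32(key)) behind the DHHC-1 prefix
        KEY=$key ID=${ids[$digest]} python3 -c 'import base64, os, zlib; k = os.environ["KEY"].encode(); print("DHHC-1:%02x:%s:" % (int(os.environ["ID"]), base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode()))' > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

    keyfile=$(gen_dhchap_key sha256 32)    # e.g. /tmp/spdk.key-sha256.XXX
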
00:26:44.191 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:44.192 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@860 -- # return 0 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ikd 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.qX5 ]] 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qX5 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.FoI 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.9kW ]] 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9kW 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.hcO 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.7jE ]] 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7jE 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.HYZ 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.e1Y ]] 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.e1Y 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.451 01:29:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.KrP 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
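
With the temp files in place, the entries above register each secret with the running nvmf_tgt under a stable name (key0/ckey0 through key4) via the keyring_file_add_key RPC, so the later attach calls can refer to the names only. Condensed, assuming rpc.py is pointed at the app's default /var/tmp/spdk.sock socket (the rpc_cmd helper in the log issues the same RPCs):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $RPC keyring_file_add_key key0  /tmp/spdk.key-null.Ikd
    $RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qX5
    $RPC keyring_file_add_key key1  /tmp/spdk.key-null.FoI
    $RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9kW
    $RPC keyring_file_add_key key2  /tmp/spdk.key-sha256.hcO
    $RPC keyring_file_add_key ckey2 /tmp/spdk.key-sha256.7jE
    $RPC keyring_file_add_key key3  /tmp/spdk.key-sha384.HYZ
    $RPC keyring_file_add_key ckey3 /tmp/spdk.key-null.e1Y
    $RPC keyring_file_add_key key4  /tmp/spdk.key-sha512.KrP
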
00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:44.451 01:29:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:47.740 Waiting for block devices as requested 00:26:47.740 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:47.740 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:47.740 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:47.740 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:47.740 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:47.999 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:47.999 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:47.999 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:47.999 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:48.257 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:48.257 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:48.257 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:48.516 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:48.516 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:48.516 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:48.775 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:48.775 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:26:49.342 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:49.342 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:49.342 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:49.342 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:26:49.342 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:49.342 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:26:49.342 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:49.342 01:29:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:49.342 01:29:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:49.601 No valid GPT data, bailing 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:26:49.601 00:26:49.601 Discovery Log Number of Records 2, Generation counter 2 00:26:49.601 =====Discovery Log Entry 0====== 00:26:49.601 trtype: tcp 00:26:49.601 adrfam: ipv4 00:26:49.601 subtype: current discovery subsystem 00:26:49.601 treq: not specified, sq flow control disable supported 00:26:49.601 portid: 1 00:26:49.601 trsvcid: 4420 00:26:49.601 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:49.601 traddr: 10.0.0.1 00:26:49.601 eflags: none 00:26:49.601 sectype: none 00:26:49.601 =====Discovery Log Entry 1====== 00:26:49.601 trtype: tcp 00:26:49.601 adrfam: ipv4 00:26:49.601 subtype: nvme subsystem 00:26:49.601 treq: not specified, sq flow control disable supported 00:26:49.601 portid: 1 00:26:49.601 trsvcid: 4420 00:26:49.601 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:49.601 traddr: 10.0.0.1 00:26:49.601 eflags: none 00:26:49.601 sectype: none 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 
]] 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:49.601 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:49.602 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.602 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.860 nvme0n1 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.860 
01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: ]] 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.860 
01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.860 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:49.861 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.861 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:49.861 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:49.861 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:49.861 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:49.861 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.861 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.119 nvme0n1 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:50.119 01:29:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: ]] 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.119 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.379 nvme0n1 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
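
Each connect_authenticate iteration above exercises one digest/DH-group/key combination end to end: the host side is restricted to that combination with bdev_nvme_set_options, attached to the kernel nvmet target at 10.0.0.1:4420 using a named DH-HMAC-CHAP key (plus a controller key when bidirectional authentication is being tested), verified, and detached again before the next combination. Reconstructed from the entries above for the sha256/ffdhe2048/key1 case, using the same rpc.py assumption as before:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # restrict the host to a single digest/DH-group combination
    $RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # connect to the kernel target with a named key and controller key
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # confirm the controller came up, then tear it down for the next iteration
    $RPC bdev_nvme_get_controllers
    $RPC bdev_nvme_detach_controller nvme0
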
00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: ]] 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.379 01:29:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.379 nvme0n1 00:26:50.379 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.379 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.379 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.379 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.379 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.379 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.640 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.640 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.640 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.640 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.640 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.640 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.640 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:50.640 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.640 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:50.640 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:50.640 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:50.640 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:26:50.640 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:26:50.640 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:50.640 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:50.640 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:26:50.640 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: ]] 00:26:50.640 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:26:50.640 01:29:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:50.640 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.641 nvme0n1 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:50.641 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.934 nvme0n1 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: ]] 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.934 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.193 nvme0n1 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: ]] 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.193 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.452 nvme0n1 00:26:51.452 
01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.452 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.452 01:29:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.452 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.452 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.452 01:29:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: ]] 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.452 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.711 nvme0n1 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: ]] 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.711 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.969 nvme0n1 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.969 
01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.969 01:29:27 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:51.969 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.228 nvme0n1 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: ]] 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:52.228 01:29:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.228 01:29:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.488 nvme0n1 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: ]] 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:52.488 01:29:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.488 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.747 nvme0n1 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: ]] 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.747 01:29:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.747 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.006 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.006 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.006 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:53.006 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:53.006 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:53.006 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.006 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.006 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:53.006 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.006 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:53.006 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:53.006 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:53.006 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:53.006 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.006 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.006 nvme0n1 00:26:53.006 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.006 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.006 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.006 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.006 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: ]] 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.265 01:29:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.524 nvme0n1 00:26:53.524 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.524 01:29:29 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.524 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.524 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.524 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.524 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.524 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.524 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.524 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.524 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.524 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.524 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.524 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.525 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.784 nvme0n1 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:26:53.784 01:29:29 
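Keyid 4 is the one entry without a controller key, which is why the attach a few entries above carries only --dhchap-key key4 and no --dhchap-ctrlr-key. The optional argument pair is produced by the ${ckeys[keyid]:+...} array expansion at host/auth.sh@58; a small stand-alone illustration of that bash idiom (the array contents here are placeholders, not the test's real secrets):

# Stand-alone demo of the ${var:+...} expansion used at host/auth.sh@58: the
# --dhchap-ctrlr-key argument pair is emitted only when a controller key exists.
ckeys=("" "DHHC-1:00:placeholder=")   # index 0: no controller key, index 1: has one

for keyid in "${!ckeys[@]}"; do
    extra=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${extra[*]:-<no controller key arguments>}"
done
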
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: ]] 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.784 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.352 nvme0n1 00:26:54.352 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.352 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.352 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.352 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.352 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.352 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.352 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.352 
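The get_main_ns_ip block, whose xtrace repeats before every attach (as in the entries above), only picks the address to dial: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, which resolves to 10.0.0.1 throughout this run. A condensed sketch of that selection logic; the transport variable name and the indirect expansion are inferred, since the trace only shows the already-expanded value tcp:

# Condensed from the nvmf/common.sh@741-755 trace above. TEST_TRANSPORT is an assumed
# variable name; the values below are the ones observed in this run.
TEST_TRANSPORT=tcp
NVMF_INITIATOR_IP=10.0.0.1        # address echoed by every get_main_ns_ip call here
NVMF_FIRST_TARGET_IP=10.0.0.2     # placeholder; only consulted on rdma runs

get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    ip=${ip_candidates[$TEST_TRANSPORT]}   # -> NVMF_INITIATOR_IP for tcp
    echo "${!ip}"                          # indirect expansion -> 10.0.0.1
}

get_main_ns_ip   # prints 10.0.0.1
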
01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.352 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.352 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.352 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.352 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.352 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:54.352 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.352 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:54.352 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:54.352 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:54.352 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:26:54.352 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: ]] 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.353 01:29:29 
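All of the secrets echoed in these iterations use the DHHC-1:<id>:<base64>: interchange format for NVMe DH-HMAC-CHAP keys: the middle field selects an optional hash transform of the secret (00 for none, 01/02/03 for SHA-256/384/512), and the base64 blob carries the key material together with a CRC-32 check value. A small snippet that splits one of the keys from this run into those fields:

# Split one of the DH-HMAC-CHAP secrets from this run into its fields
# (format: DHHC-1:<hash id>:<base64 of key material + CRC-32>:).
key='DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==:'
IFS=: read -r tag hash_id material _ <<< "$key"
printf '%s / hash id %s / %d bytes decoded\n' "$tag" "$hash_id" \
    "$(printf '%s' "$material" | base64 -d | wc -c)"
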
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.353 01:29:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.612 nvme0n1 00:26:54.612 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.612 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.612 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.612 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.612 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.612 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.612 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.612 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.612 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.612 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.612 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.612 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.612 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:54.612 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.612 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:54.612 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:54.612 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:54.612 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:26:54.612 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:26:54.612 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:54.613 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:54.613 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:26:54.613 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: ]] 00:26:54.613 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:26:54.613 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:54.613 01:29:30 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.613 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:54.613 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:54.613 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:54.613 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.613 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:54.613 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.613 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.872 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.872 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.872 01:29:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:54.872 01:29:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:54.872 01:29:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:54.872 01:29:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.872 01:29:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.872 01:29:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:54.872 01:29:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.872 01:29:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:54.872 01:29:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:54.872 01:29:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:54.872 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:54.872 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.872 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.131 nvme0n1 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.131 
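Every connect_authenticate iteration in this section closes the same way, as the host/auth.sh@64-65 entries show each time: bdev_nvme_get_controllers is queried, the single expected controller name is matched, and the controller is detached so the next digest/dhgroup/keyid combination starts from a clean state. Condensed:

# Post-attach verification and teardown, condensed from the host/auth.sh@64-65 entries
# (rpc_cmd is the test framework's JSON-RPC wrapper, assumed available as in this run).
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]                      # the attach must have produced exactly this controller
rpc_cmd bdev_nvme_detach_controller nvme0   # tear down before the next combination
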
01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: ]] 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.131 01:29:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.699 nvme0n1 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.699 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.958 nvme0n1 00:26:55.958 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.958 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.958 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.958 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.958 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.958 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.958 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.958 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.958 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.958 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.958 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.958 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:55.958 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: ]] 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.959 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.218 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.218 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.218 01:29:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:56.218 01:29:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.218 01:29:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.218 01:29:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.218 01:29:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.218 01:29:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:56.218 01:29:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.218 01:29:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:56.218 01:29:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:56.218 01:29:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:56.218 01:29:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:56.218 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.218 01:29:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.785 nvme0n1 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.785 01:29:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: ]] 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:56.785 01:29:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:56.786 01:29:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:56.786 01:29:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:56.786 01:29:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:56.786 01:29:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:56.786 01:29:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:56.786 01:29:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:56.786 01:29:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:56.786 01:29:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:56.786 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:56.786 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:56.786 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.353 nvme0n1 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: ]] 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.353 01:29:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.921 nvme0n1 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.921 
01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: ]] 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:57.921 01:29:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.488 nvme0n1 00:26:58.488 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.488 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.488 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.488 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.488 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.488 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.488 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.488 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.488 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.488 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.747 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.747 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.747 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:58.747 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.747 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:58.747 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:58.747 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:58.747 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:26:58.747 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:58.747 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:58.747 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:58.747 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:26:58.747 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:58.747 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:58.748 
01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.748 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.315 nvme0n1 00:26:59.315 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.315 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.315 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.315 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.315 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.315 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.315 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.315 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.315 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.315 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.315 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.315 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:59.315 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:59.315 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.315 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:26:59.315 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.315 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: ]] 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.316 nvme0n1 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.316 01:29:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: ]] 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
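At this point the outer digest loop has advanced from hmac(sha256) to hmac(sha384) and the dhgroup list has restarted at ffdhe2048, so the whole sequence above repeats for the new digest. The structure driving these iterations is the nested loop at host/auth.sh@100-104; a runnable skeleton is below, with the array contents and the two stub functions standing in for values and helpers defined elsewhere in auth.sh (only sha256/sha384 and ffdhe2048/4096/6144/8192 are actually visible in this excerpt):

# Skeleton of the host/auth.sh@100-104 driver loop. Array contents and the two stub
# functions are placeholders; the real definitions live elsewhere in auth.sh.
digests=(sha256 sha384)                             # sha384 iterations begin at this point in the log
dhgroups=(ffdhe2048 ffdhe4096 ffdhe6144 ffdhe8192)  # groups exercised in this excerpt
keys=(key0 key1 key2 key3 key4)                     # stand-ins for the DHHC-1 secrets

nvmet_auth_set_key()   { echo "target: $1 $2 keyid $3"; }   # stub
connect_authenticate() { echo "host:   $1 $2 keyid $3"; }   # stub

for digest in "${digests[@]}"; do            # host/auth.sh@100
    for dhgroup in "${dhgroups[@]}"; do      # host/auth.sh@101
        for keyid in "${!keys[@]}"; do       # host/auth.sh@102
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # host/auth.sh@103
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # host/auth.sh@104
        done
    done
done
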
00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.575 nvme0n1 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: ]] 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.575 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.834 nvme0n1 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.834 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: ]] 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.835 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.094 nvme0n1 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.094 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.353 nvme0n1 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: ]] 00:27:00.353 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
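The loop structure that produces this repetition is visible in the trace (host/auth.sh@101-104): for the sha384 pass shown in this excerpt, every DH group is combined with every configured key index, and each combination is first programmed on the target and then exercised from the host. A sketch under those assumptions is below; the exact contents of the dhgroups and keys arrays beyond what appears in the trace are not shown in this excerpt.

  # Sketch of the sweep driving this section of the log (host/auth.sh@101-104).
  # Only the combinations visible in the trace are certain; the array contents
  # are otherwise assumptions.
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)   # groups appearing in this excerpt
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do          # key indices 0..4 above
          nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # target-side setup
          connect_authenticate sha384 "$dhgroup" "$keyid"  # host-side connect
      done
  done

Note that keyid 4 has no controller key (its ckey is empty, hence the [[ -z '' ]] checks in the trace), so the ${ckeys[keyid]:+...} expansion drops --dhchap-ctrlr-key for that case and that combination exercises host authentication without the bidirectional controller challenge.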
00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.354 01:29:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.613 nvme0n1 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: ]] 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
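nvmet_auth_set_key (host/auth.sh@42-51) is the target-side half of each iteration. The xtrace shows the values being echoed (the 'hmac(sha384)' string, the DH group name and the DHHC-1 secrets) but not where they are written, because set -x does not show redirection targets. Assuming the target here is the Linux kernel nvmet (the function name and the value formats point that way), the writes plausibly land in the per-host configfs attributes; the sketch below is under that assumption and is not confirmed by this excerpt.

  # Hedged sketch of the target side of nvmet_auth_set_key (host/auth.sh@42-51).
  # The echoed values come straight from the trace; the configfs destinations
  # are assumed (standard Linux nvmet layout).
  hostnqn=nqn.2024-02.io.spdk:host0          # assumed: matches the -q argument above
  cfg=/sys/kernel/config/nvmet/hosts/$hostnqn
  echo 'hmac(sha384)' > "$cfg/dhchap_hash"        # host/auth.sh@48
  echo "$dhgroup"     > "$cfg/dhchap_dhgroup"     # host/auth.sh@49, e.g. ffdhe3072
  echo "$key"         > "$cfg/dhchap_key"         # host/auth.sh@50, the DHHC-1:... secret
  [[ -z $ckey ]] || echo "$ckey" > "$cfg/dhchap_ctrl_key"   # host/auth.sh@51

The DHHC-1:NN:<base64>: strings are the textual form of the CHAP secrets; the two-digit field after DHHC-1 appears to select the optional secret-transformation hash (00 meaning no transformation), with the base64 payload carrying the secret material itself.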
00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.613 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.898 nvme0n1 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: ]] 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:00.898 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.158 nvme0n1 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: ]] 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.158 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.417 nvme0n1 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.417 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.418 01:29:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.418 01:29:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:01.418 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.418 01:29:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.418 nvme0n1 00:27:01.418 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.676 01:29:37 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: ]] 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.676 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.936 nvme0n1 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: ]] 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.936 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.195 nvme0n1 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.195 01:29:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: ]] 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.195 01:29:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.454 nvme0n1 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: ]] 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:02.454 01:29:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.454 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.714 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.714 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.714 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:02.714 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:02.714 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:02.714 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.714 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.714 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:02.714 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.714 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:02.714 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:02.714 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:02.714 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:02.714 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.714 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.714 nvme0n1 00:27:02.714 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.714 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.714 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.714 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.714 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:02.974 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.251 nvme0n1 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: ]] 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:27:03.251 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.252 01:29:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.526 nvme0n1 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: ]] 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:03.526 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.785 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:03.785 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.785 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.786 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:03.786 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.786 01:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:03.786 01:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.786 01:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.786 01:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.786 01:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.786 01:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.786 01:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.786 01:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.786 01:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.786 01:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.786 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:03.786 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:03.786 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.045 nvme0n1 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.045 01:29:39 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: ]] 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.045 01:29:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.613 nvme0n1 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: ]] 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.613 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:04.614 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:04.614 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:04.614 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.614 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.614 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:04.614 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.614 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:04.614 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:04.614 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:04.614 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:04.614 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.614 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.873 nvme0n1 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.873 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.440 nvme0n1 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: ]] 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
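[editor's note] For readers skimming the trace, every keyid iteration above replays the same host-side RPC sequence that the xtrace output shows (host/auth.sh@60, @61, @64, @65). A minimal standalone sketch of that sequence follows, assuming scripts/rpc.py is the client behind the test's rpc_cmd wrapper, that a target subsystem nqn.2024-02.io.spdk:cnode0 is already listening on 10.0.0.1:4420, and that DH-HMAC-CHAP keys named key0/ckey0 were registered earlier in the test; the digest/dhgroup pair (sha384/ffdhe8192 in this part of the log) is supplied by the surrounding loops. This is an illustration of what the trace is doing, not part of the original log.

  # Restrict the initiator to one digest/dhgroup pair (mirrors host/auth.sh@60 in the trace).
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

  # Attach with the host key and optional controller key (mirrors host/auth.sh@61).
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # Verify the controller came up, then tear it down before the next iteration
  # (mirrors host/auth.sh@64 and @65; the trace expects the name nvme0).
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
  scripts/rpc.py bdev_nvme_detach_controller nvme0

The repeated get_main_ns_ip fragments in the trace (nvmf/common.sh@741-755) are only resolving which address to pass to -a: for the tcp transport the candidate is NVMF_INITIATOR_IP, which in this run is 10.0.0.1.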
00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.440 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:05.441 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.441 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:05.441 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:05.441 01:29:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:05.441 01:29:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:05.441 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.441 01:29:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.008 nvme0n1 00:27:06.008 01:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.008 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.008 01:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.008 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.008 01:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.008 01:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.008 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.008 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.008 01:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.008 01:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.008 01:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.008 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.008 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:27:06.008 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.008 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: ]] 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.009 01:29:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.577 nvme0n1 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: ]] 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:06.577 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.145 nvme0n1 00:27:07.145 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.145 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.145 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.145 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.145 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.145 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: ]] 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.404 01:29:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.980 nvme0n1 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.980 01:29:43 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.980 01:29:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.547 nvme0n1 00:27:08.547 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.547 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.547 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.547 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.547 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.547 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.547 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.547 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.547 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.547 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.547 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.547 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:08.547 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:08.547 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.547 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:08.547 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.547 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:08.547 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.547 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:08.547 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: ]] 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.548 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.806 nvme0n1 00:27:08.806 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.806 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.806 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.806 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.806 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.806 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.806 01:29:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: ]] 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.807 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.065 nvme0n1 00:27:09.065 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.065 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.065 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.065 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.065 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.065 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.065 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.065 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.065 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.065 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.065 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.065 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.065 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:09.065 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.065 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:09.065 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:09.065 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:09.065 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: ]] 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.066 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.324 nvme0n1 00:27:09.324 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.324 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.324 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.324 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.324 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.324 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.324 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.324 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.324 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.324 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.324 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.324 01:29:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.324 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:09.324 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.324 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: ]] 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.325 01:29:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.325 nvme0n1 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.325 01:29:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.325 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.584 nvme0n1 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:09.584 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:27:09.585 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: ]] 00:27:09.585 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:27:09.585 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:09.585 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.585 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:09.585 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:09.585 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:09.585 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.585 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:09.585 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.585 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.843 nvme0n1 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.843 
01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:09.843 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:09.844 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:09.844 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:27:09.844 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:27:09.844 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:09.844 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:09.844 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:27:09.844 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: ]] 00:27:09.844 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:27:09.844 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:09.844 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.844 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:09.844 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:09.844 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:09.844 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.844 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:09.844 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:09.844 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.103 01:29:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.103 nvme0n1 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
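This stretch of the trace repeats one host-side pattern per digest/dhgroup/keyid combination, driven by the host/auth.sh@100-103 loops visible above: restrict the initiator's allowed DH-HMAC-CHAP parameters, attach to the kernel target with the matching key, confirm the controller exists, and detach before the next combination. A minimal stand-alone sketch of the iteration traced here (sha512, ffdhe3072, keyid 2), assuming rpc_cmd simply forwards its arguments to scripts/rpc.py and that the key names key2/ckey2 were registered with the SPDK application earlier in auth.sh, outside this excerpt:

  # Allow only this digest and DH group on the initiator for this pass.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # Connect to the kernel target at the initiator-side address picked by get_main_ns_ip
  # (10.0.0.1 in this run), authenticating with key2 and expecting the controller to
  # answer with ckey2 (bidirectional authentication).
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # Verify the controller came up, then tear it down before the next combination.
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0
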
00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: ]] 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.103 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.362 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.362 01:29:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.362 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:10.362 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.362 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.362 nvme0n1 00:27:10.362 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.362 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.362 01:29:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.362 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.362 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.362 01:29:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.362 01:29:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: ]] 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
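The get_main_ns_ip helper being traced at this point (the nvmf/common.sh@741-755 lines) does nothing more than map the active transport to an environment-provided address: NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, which is why every attach in this run targets 10.0.0.1. A condensed paraphrase of that helper, reconstructed from the trace rather than quoted from nvmf/common.sh; the name of the variable holding the transport ($TEST_TRANSPORT) is an assumption, since the trace only shows the literal value tcp:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # Bail out if no transport is set or it has no candidate variable.
      [[ -z $TEST_TRANSPORT ]] && return 1
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable that holds the address
      ip=${!ip}                              # indirect expansion: 10.0.0.1 for tcp here
      [[ -z $ip ]] && return 1
      echo "$ip"
  }
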
00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.362 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.621 nvme0n1 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:10.621 
01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.621 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.622 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:10.622 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.622 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.880 nvme0n1 00:27:10.880 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.880 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:10.880 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.880 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:10.880 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.880 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.880 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:10.880 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:10.880 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.880 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: ]] 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:10.881 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.140 nvme0n1 00:27:11.140 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.140 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.140 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.140 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.140 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.140 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: ]] 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.399 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:11.400 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:11.400 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:11.400 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.400 01:29:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:11.400 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.400 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.400 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.400 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.400 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.400 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.400 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.400 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.400 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.400 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.400 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.400 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.400 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.400 01:29:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.400 01:29:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:11.400 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.400 01:29:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.660 nvme0n1 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
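One detail of the host/auth.sh@58 line that repeats throughout this loop is easy to misread: the ${ckeys[keyid]:+...} expansion adds the --dhchap-ctrlr-key argument only when a controller key exists for that keyid, and what it passes is the key name (ckey2), not the key material. Keyid 4 has an empty ckey in this run, so its attach calls omit --dhchap-ctrlr-key entirely and no bidirectional authentication is requested. A small illustration of the same expansion, with the long DHHC-1 value abbreviated here for readability (the full values appear verbatim in the trace):

  keyid=2
  ckeys[2]='DHHC-1:01:ZTI3...GZpar:'   # controller key material exists for keyid 2
  ckeys[4]=''                          # keyid 4 has no controller key
  # Expands to two extra arguments when ckeys[keyid] is non-empty, to nothing otherwise.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${ckey[@]}"    # prints "--dhchap-ctrlr-key ckey2"; expands to nothing for keyid=4
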
00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: ]] 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.660 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.919 nvme0n1 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: ]] 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.919 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:11.920 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:11.920 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.178 nvme0n1 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.178 01:29:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.179 01:29:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:12.179 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.179 01:29:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.437 nvme0n1 00:27:12.437 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.437 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.437 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.437 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.437 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.437 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.437 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.437 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.437 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:12.437 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: ]] 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
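The nvmf/common.sh@741-755 lines above are the expansion of get_main_ns_ip: the helper maps the transport in use to the environment variable that holds the relevant IP and prints its value (10.0.0.1 in this run). A sketch reconstructed from that trace; the name of the variable carrying the transport string ("tcp" here) is an assumption:

  # Reconstructed from the nvmf/common.sh@741-755 trace; not the verbatim helper.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )
      # The trace shows "tcp", so NVMF_INITIATOR_IP is selected; the transport
      # variable name (TEST_TRANSPORT) is assumed.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1   # indirect expansion, e.g. NVMF_INITIATOR_IP=10.0.0.1
      echo "${!ip}"
  }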
00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.696 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.955 nvme0n1 00:27:12.955 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.955 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:12.955 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.955 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:12.955 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.955 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.955 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:12.955 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:12.955 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.955 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.955 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.955 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:12.955 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:12.955 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: ]] 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
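Each connect_authenticate pass in the trace reduces to four SPDK RPCs, all of which appear verbatim above. A condensed sketch of one pass (sha512/ffdhe6144, key index 1); key1/ckey1 are DH-HMAC-CHAP key names assumed to have been set up earlier in the test, and rpc_cmd is the wrapper used throughout this log:

  # One authentication round as driven from host/auth.sh@55-@65 above.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Authentication succeeded if the controller shows up under the expected name...
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  # ...after which it is detached so the next key/dhgroup combination can be tried.
  rpc_cmd bdev_nvme_detach_controller nvme0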
00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:12.956 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.523 nvme0n1 00:27:13.523 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.523 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.523 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.523 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.523 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.523 01:29:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.523 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.523 01:29:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: ]] 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.523 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.524 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.524 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.524 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.524 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.524 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.524 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.524 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.524 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.524 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.524 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:13.524 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.524 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.782 nvme0n1 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: ]] 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.782 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.041 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.041 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.041 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.041 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.041 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.041 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.041 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.041 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.041 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.041 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.041 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:14.041 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.041 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.300 nvme0n1 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.300 01:29:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.869 nvme0n1 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.869 01:29:50 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmMyY2M5YTA0NDMxYzg1ZjFlNGExZDA2NDcyYjM0ODW99NbP: 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: ]] 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmQ3NjkxOGFlYzcyZTVkODUxZTcwODdiOWZjZTI4ZmJiZjBkZDBlNmMyNGI2ODE0ZTMyMjVkOWU5MTdmYzMwOXeGYP8=: 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:14.869 01:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.437 nvme0n1 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: ]] 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.437 01:29:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.437 01:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.437 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.437 01:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.437 01:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.437 01:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.437 01:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.437 01:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.437 01:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.437 01:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.437 01:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.437 01:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.437 01:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.437 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:15.437 01:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.437 01:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.008 nvme0n1 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.008 01:29:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2U3ZjkyY2RkNDE1ZGE0MjZmYmZkM2QwZGI2YzZjOWQT+6eP: 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: ]] 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZTI3NGE3OGY1ZGViY2FlYzBmNTRlODU4Njg0ODdlNDSGZpar: 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.008 01:29:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.633 nvme0n1 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNjMWNhMmUwNzQwOTg0YmJjOWJmMGU1NmY3ZTRjOWI4NWYxMTY2MGMyODllNjk3jhyVcQ==: 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: ]] 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjgxMGEyNDkwZTQzYjQzMDNhMjcwNDk0YjA5MWY2NmIo92Ko: 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:16.633 01:29:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:16.633 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.634 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:16.634 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.634 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.634 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.634 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.634 01:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.634 01:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.634 01:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.634 01:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.634 01:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.634 01:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.634 01:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.634 01:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.634 01:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.634 01:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.634 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:16.634 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.634 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.202 nvme0n1 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThkMWU4NGRlMTkyN2QwYzY4OTliMmI5Y2JjMGZkYzg4NzY3ODIzNTc4MzNiMGE1Yjg4OWIzMDE4MGNjMDIzYtml/es=: 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.202 01:29:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.462 01:29:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:17.462 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:17.462 01:29:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.028 nvme0n1 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzQ3MDJiZTFhOTYxMjE0NzFkY2Q5NGFlMmEwYjE0ZDY5MjhiYTY3ZWZjMmNjNTZmwFFq1Q==: 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: ]] 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWRmZmIyMjAwMTdmZjJmZmQxMzY2MWU2ZmNiZTZmNTQ0M2Q1MjVlNTNhOGJjODIw5mI8Aw==: 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.028 
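Each nvmet_auth_set_key call above loads one digest/DH-group/secret tuple into the kernel target's entry for the host; the connect attempts that follow then succeed or fail depending on whether the initiator offers the matching key. The xtrace output only shows the bare echo commands, not their redirection targets, so the configfs attribute names in this sketch are an assumption based on the Linux nvmet host entry layout; the host NQN is the test's own and the secrets are elided:

  HOST_CFS=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  KEY='DHHC-1:00:...'      # host secret for this key ID (full value elided)
  CKEY='DHHC-1:02:...'     # controller secret for bidirectional auth; empty when unused (key ID 4)

  echo 'hmac(sha256)' > "$HOST_CFS/dhchap_hash"       # digest       (attribute name assumed)
  echo ffdhe2048      > "$HOST_CFS/dhchap_dhgroup"    # DH group     (attribute name assumed)
  echo "$KEY"         > "$HOST_CFS/dhchap_key"        # host key     (attribute name assumed)
  [[ -n $CKEY ]] && echo "$CKEY" > "$HOST_CFS/dhchap_ctrl_key"   # optional controller key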
01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:18.028 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.029 request: 00:27:18.029 { 00:27:18.029 "name": "nvme0", 00:27:18.029 "trtype": "tcp", 00:27:18.029 "traddr": "10.0.0.1", 00:27:18.029 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:18.029 "adrfam": "ipv4", 00:27:18.029 "trsvcid": "4420", 00:27:18.029 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:18.029 "method": "bdev_nvme_attach_controller", 00:27:18.029 "req_id": 1 00:27:18.029 } 00:27:18.029 Got JSON-RPC error response 00:27:18.029 response: 00:27:18.029 { 00:27:18.029 "code": -32602, 00:27:18.029 "message": "Invalid parameters" 00:27:18.029 } 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:18.029 
01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.029 request: 00:27:18.029 { 00:27:18.029 "name": "nvme0", 00:27:18.029 "trtype": "tcp", 00:27:18.029 "traddr": "10.0.0.1", 00:27:18.029 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:18.029 "adrfam": "ipv4", 00:27:18.029 "trsvcid": "4420", 00:27:18.029 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:18.029 "dhchap_key": "key2", 00:27:18.029 "method": "bdev_nvme_attach_controller", 00:27:18.029 "req_id": 1 00:27:18.029 } 00:27:18.029 Got JSON-RPC error response 00:27:18.029 response: 00:27:18.029 { 00:27:18.029 "code": -32602, 00:27:18.029 "message": "Invalid parameters" 00:27:18.029 } 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
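On the SPDK initiator side the pattern is symmetric: restrict bdev_nvme to a single digest/DH-group pair, attach with the key names registered for that key ID, confirm the controller appears, and detach. The expected-failure checks (no key offered, a key for the wrong key ID above, and a mismatched controller key just below) then assert that the same RPC exits non-zero with the -32602 Invalid parameters response. A condensed sketch using the RPCs seen in the traces; key2/ckey2 are key names registered earlier in the test, not raw secrets, and the rpc.py path is shortened to be relative to the SPDK tree:

  RPC="scripts/rpc.py"    # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py in the traces

  $RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

  # attach with the host key and the controller (bidirectional) key for key ID 2
  $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  $RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
  $RPC bdev_nvme_detach_controller nvme0

  # negative case: the kernel target is now keyed for key ID 1, so offering key2 must fail
  if $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
      echo "attach unexpectedly succeeded" >&2
      exit 1
  fi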
00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.029 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.288 request: 00:27:18.288 { 00:27:18.288 "name": "nvme0", 00:27:18.288 "trtype": "tcp", 00:27:18.288 "traddr": "10.0.0.1", 00:27:18.288 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:18.288 "adrfam": "ipv4", 00:27:18.288 "trsvcid": "4420", 00:27:18.288 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:18.288 "dhchap_key": "key1", 00:27:18.288 "dhchap_ctrlr_key": "ckey2", 00:27:18.288 "method": "bdev_nvme_attach_controller", 00:27:18.288 
"req_id": 1 00:27:18.288 } 00:27:18.288 Got JSON-RPC error response 00:27:18.288 response: 00:27:18.288 { 00:27:18.288 "code": -32602, 00:27:18.288 "message": "Invalid parameters" 00:27:18.288 } 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:18.288 rmmod nvme_tcp 00:27:18.288 rmmod nvme_fabrics 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 48991 ']' 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 48991 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@946 -- # '[' -z 48991 ']' 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@950 -- # kill -0 48991 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # uname 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 48991 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 48991' 00:27:18.288 killing process with pid 48991 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # kill 48991 00:27:18.288 01:29:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@970 -- # wait 48991 00:27:18.547 01:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:18.547 01:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:18.547 01:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:18.547 01:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:18.547 01:29:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:18.547 01:29:54 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.547 01:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:18.547 01:29:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:21.082 01:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:21.082 01:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:21.082 01:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:21.082 01:29:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:21.082 01:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:21.082 01:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:27:21.082 01:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:21.082 01:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:21.082 01:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:21.082 01:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:21.082 01:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:21.082 01:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:21.082 01:29:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:24.371 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:24.371 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:24.371 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:24.371 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:24.371 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:24.371 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:24.371 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:24.371 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:24.371 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:24.371 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:24.371 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:24.371 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:24.371 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:24.371 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:24.371 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:24.371 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:25.749 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:27:25.749 01:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Ikd /tmp/spdk.key-null.FoI /tmp/spdk.key-sha256.hcO /tmp/spdk.key-sha384.HYZ /tmp/spdk.key-sha512.KrP /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:25.749 01:30:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:29.035 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:29.035 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:29.035 0000:00:04.5 (8086 2021): Already using the 
vfio-pci driver 00:27:29.035 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:29.035 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:29.035 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:29.035 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:29.035 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:29.035 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:29.035 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:29.035 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:29.035 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:29.035 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:29.035 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:29.035 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:29.035 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:29.035 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:29.035 00:27:29.035 real 0m52.511s 00:27:29.035 user 0m45.363s 00:27:29.035 sys 0m14.266s 00:27:29.035 01:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:29.035 01:30:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.035 ************************************ 00:27:29.035 END TEST nvmf_auth_host 00:27:29.035 ************************************ 00:27:29.035 01:30:04 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:27:29.035 01:30:04 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:29.035 01:30:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:27:29.035 01:30:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:29.035 01:30:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:29.035 ************************************ 00:27:29.035 START TEST nvmf_digest 00:27:29.035 ************************************ 00:27:29.035 01:30:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:29.294 * Looking for test storage... 
00:27:29.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:29.294 01:30:04 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:29.294 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:29.294 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:29.294 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:29.294 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:29.294 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:29.294 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:29.294 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:29.294 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:29.294 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:29.294 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:29.295 01:30:04 
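Alongside the target-side defaults (ports 4420-4422, serial SPDKISFASTANDAWESOME), nvmf/common.sh also prepares a kernel-initiator identity: a random host NQN from nvme gen-hostnqn, the matching host ID, and an NVME_HOST argument array that gets spliced into nvme connect invocations. The digest runs below drive I/O from a userspace bdevperf instead, but where those variables are consumed the call would look roughly like this (an illustration assembled from the defaults above, not a command this particular run issues):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:006f0d1b-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # the uuid part doubles as the host ID in the traces
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

  # kernel-initiator connect to the test subsystem over NVMe/TCP
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"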
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:27:29.295 01:30:04 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:35.888 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:35.888 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:35.888 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:35.889 Found net devices under 0000:af:00.0: cvl_0_0 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:35.889 Found net devices under 0000:af:00.1: cvl_0_1 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:35.889 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:36.148 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:36.148 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:36.148 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:36.148 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:36.148 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:36.148 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:36.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:36.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:27:36.408 00:27:36.408 --- 10.0.0.2 ping statistics --- 00:27:36.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.408 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:36.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:36.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:27:36.408 00:27:36.408 --- 10.0.0.1 ping statistics --- 00:27:36.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:36.408 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:36.408 ************************************ 00:27:36.408 START TEST nvmf_digest_clean 00:27:36.408 ************************************ 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=63473 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 63473 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 63473 ']' 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:36.408 
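nvmftestinit/nvmf_tcp_init above turns the two ice ports into an initiator/target pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened in the firewall, and connectivity is verified with one ping in each direction before nvme-tcp is loaded. Condensed from the commands in the traces:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1

  ip netns add cvl_0_0_ns_spdk                       # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the first port into it

  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in

  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator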
01:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:36.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:36.408 01:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:36.409 01:30:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:36.409 [2024-05-15 01:30:11.995421] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:27:36.409 [2024-05-15 01:30:11.995467] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:36.409 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.409 [2024-05-15 01:30:12.070129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.667 [2024-05-15 01:30:12.138386] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.667 [2024-05-15 01:30:12.138426] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:36.667 [2024-05-15 01:30:12.138436] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.667 [2024-05-15 01:30:12.138444] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.667 [2024-05-15 01:30:12.138450] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
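nvmf_tgt is launched inside the target namespace with --wait-for-rpc, so nothing is configured until the framework is released over its RPC socket. The RPC batch sent by common_target_config is collapsed into a single rpc_cmd in the traces, but the null0 bdev and the 10.0.0.2:4420 TCP listener that appear next would come from a sequence along these lines (the RPC names are standard SPDK ones; the exact sizes and flags digest.sh uses are an assumption):

  RPC="scripts/rpc.py"                    # talks to the default /var/tmp/spdk.sock

  $RPC framework_start_init               # release the app started with --wait-for-rpc
  $RPC nvmf_create_transport -t tcp       # bring up the TCP transport
  $RPC bdev_null_create null0 1000 512    # backing bdev for the namespace (size/block size assumed)
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420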
00:27:36.667 [2024-05-15 01:30:12.138471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.236 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:37.236 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:37.236 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:37.236 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:37.236 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:37.236 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:37.236 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:37.236 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:37.236 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:37.236 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.236 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:37.236 null0 00:27:37.236 [2024-05-15 01:30:12.918271] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:37.521 [2024-05-15 01:30:12.942266] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:37.521 [2024-05-15 01:30:12.942475] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.521 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.521 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:37.521 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:37.521 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:37.521 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:37.521 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:37.521 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:37.521 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:37.521 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=63703 00:27:37.521 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 63703 /var/tmp/bperf.sock 00:27:37.521 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 63703 ']' 00:27:37.521 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:37.521 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:37.521 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:27:37.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:37.521 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:37.521 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:37.521 01:30:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:37.521 [2024-05-15 01:30:12.992184] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:27:37.521 [2024-05-15 01:30:12.992236] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63703 ] 00:27:37.521 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.521 [2024-05-15 01:30:13.060754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.521 [2024-05-15 01:30:13.130771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.102 01:30:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:38.102 01:30:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:38.102 01:30:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:38.103 01:30:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:38.103 01:30:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:38.361 01:30:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:38.361 01:30:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:38.620 nvme0n1 00:27:38.620 01:30:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:38.620 01:30:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:38.879 Running I/O for 2 seconds... 
00:27:40.784 00:27:40.784 Latency(us) 00:27:40.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.784 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:40.784 nvme0n1 : 2.00 28283.05 110.48 0.00 0.00 4520.75 2372.40 14050.92 00:27:40.784 =================================================================================================================== 00:27:40.784 Total : 28283.05 110.48 0.00 0.00 4520.75 2372.40 14050.92 00:27:40.784 0 00:27:40.784 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:40.784 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:40.784 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:40.784 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:40.784 | select(.opcode=="crc32c") 00:27:40.784 | "\(.module_name) \(.executed)"' 00:27:40.784 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:41.043 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:41.043 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:41.043 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:41.043 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:41.043 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 63703 00:27:41.043 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 63703 ']' 00:27:41.043 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 63703 00:27:41.043 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:27:41.043 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:41.043 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63703 00:27:41.043 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:41.043 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:41.043 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63703' 00:27:41.043 killing process with pid 63703 00:27:41.043 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 63703 00:27:41.043 Received shutdown signal, test time was about 2.000000 seconds 00:27:41.043 00:27:41.043 Latency(us) 00:27:41.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.043 =================================================================================================================== 00:27:41.043 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:41.043 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 63703 00:27:41.302 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:41.302 01:30:16 
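The first iteration attaches the controller with --ddgst, so every 4 KiB read carries an NVMe/TCP data digest, and the pass criterion is simply that the crc32c opcode shows up in the bdevperf app's accel statistics under the expected module ('software' here, because DSA offload is disabled for this run). The check in the traces boils down to:

  RPC="scripts/rpc.py -s /var/tmp/bperf.sock"

  # pull per-opcode accel statistics from the bdevperf app and keep only crc32c
  read -r acc_module acc_executed < <($RPC accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')

  # digests were actually computed, and by the module the test expects
  (( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "crc32c handled by $acc_module"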
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:41.302 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:41.302 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:41.302 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:41.302 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:41.302 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:41.302 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=64300 00:27:41.302 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 64300 /var/tmp/bperf.sock 00:27:41.302 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:41.302 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 64300 ']' 00:27:41.302 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:41.302 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:41.302 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:41.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:41.302 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:41.302 01:30:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:41.302 [2024-05-15 01:30:16.851042] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:27:41.302 [2024-05-15 01:30:16.851095] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64300 ] 00:27:41.302 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:41.302 Zero copy mechanism will not be used. 
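The pass/fail decision at the end of the pass above does not come from the I/O numbers: the test asks the bdevperf app's accel layer how many crc32c operations it executed and by which module, and requires a non-zero count from the expected module (software here, since these passes run with scan_dsa=false). A sketch of that check, reusing the jq filter from the xtrace:

./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# expected output here: "software <N>" with N greater than 0

The "I/O size of 131072 is greater than zero copy threshold (65536)" notice printed by this second pass is informational: with 128 KiB I/Os the initiator simply skips the zero-copy send path, which does not affect the digest check.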
00:27:41.302 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.302 [2024-05-15 01:30:16.919632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.302 [2024-05-15 01:30:16.992141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.239 01:30:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:42.239 01:30:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:42.239 01:30:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:42.239 01:30:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:42.239 01:30:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:42.239 01:30:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:42.239 01:30:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:42.498 nvme0n1 00:27:42.498 01:30:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:42.498 01:30:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:42.756 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:42.756 Zero copy mechanism will not be used. 00:27:42.756 Running I/O for 2 seconds... 
00:27:44.661 00:27:44.661 Latency(us) 00:27:44.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.661 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:44.661 nvme0n1 : 2.00 2842.54 355.32 0.00 0.00 5625.26 1952.97 20027.80 00:27:44.661 =================================================================================================================== 00:27:44.661 Total : 2842.54 355.32 0.00 0.00 5625.26 1952.97 20027.80 00:27:44.661 0 00:27:44.661 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:44.661 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:44.661 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:44.661 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:44.661 | select(.opcode=="crc32c") 00:27:44.661 | "\(.module_name) \(.executed)"' 00:27:44.661 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:44.920 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:44.920 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:44.920 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:44.920 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:44.920 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 64300 00:27:44.920 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 64300 ']' 00:27:44.921 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 64300 00:27:44.921 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:27:44.921 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:44.921 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 64300 00:27:44.921 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:44.921 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:44.921 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64300' 00:27:44.921 killing process with pid 64300 00:27:44.921 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 64300 00:27:44.921 Received shutdown signal, test time was about 2.000000 seconds 00:27:44.921 00:27:44.921 Latency(us) 00:27:44.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:44.921 =================================================================================================================== 00:27:44.921 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:44.921 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 64300 00:27:45.180 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:45.180 01:30:20 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:45.180 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:45.180 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:45.180 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:45.180 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:45.180 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:45.180 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=64913 00:27:45.180 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 64913 /var/tmp/bperf.sock 00:27:45.181 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:45.181 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 64913 ']' 00:27:45.181 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:45.181 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:45.181 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:45.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:45.181 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:45.181 01:30:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:45.181 [2024-05-15 01:30:20.733990] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
00:27:45.181 [2024-05-15 01:30:20.734043] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64913 ] 00:27:45.181 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.181 [2024-05-15 01:30:20.804254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.440 [2024-05-15 01:30:20.879736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.009 01:30:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:46.009 01:30:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:46.009 01:30:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:46.009 01:30:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:46.009 01:30:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:46.268 01:30:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:46.268 01:30:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:46.528 nvme0n1 00:27:46.528 01:30:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:46.528 01:30:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:46.528 Running I/O for 2 seconds... 
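Between passes, each bdevperf instance is torn down with the killprocess pattern visible in the xtrace: confirm the pid is still alive, log the process name, send it a signal, and wait for it to exit so the next pass can reuse /var/tmp/bperf.sock. A condensed sketch of just those commands (killprocess itself is a helper in autotest_common.sh with more error handling than shown here):

bperfpid=$!                              # pid of the backgrounded bdevperf, recorded as bperfpid in the log
kill -0 "$bperfpid"                      # is it still running?
ps --no-headers -o comm= "$bperfpid"     # name of the process about to be killed
kill "$bperfpid"
wait "$bperfpid"                         # bdevperf prints its shutdown latency summary on the way out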
00:27:48.433 00:27:48.433 Latency(us) 00:27:48.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.433 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:48.433 nvme0n1 : 2.00 28035.85 109.52 0.00 0.00 4557.52 2988.44 20866.66 00:27:48.433 =================================================================================================================== 00:27:48.433 Total : 28035.85 109.52 0.00 0.00 4557.52 2988.44 20866.66 00:27:48.433 0 00:27:48.692 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:48.692 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:48.692 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:48.692 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:48.692 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:48.692 | select(.opcode=="crc32c") 00:27:48.692 | "\(.module_name) \(.executed)"' 00:27:48.692 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:48.692 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:48.692 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:48.692 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:48.692 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 64913 00:27:48.692 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 64913 ']' 00:27:48.692 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 64913 00:27:48.692 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:27:48.692 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:48.692 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 64913 00:27:48.692 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:48.692 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:48.692 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64913' 00:27:48.692 killing process with pid 64913 00:27:48.692 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 64913 00:27:48.692 Received shutdown signal, test time was about 2.000000 seconds 00:27:48.692 00:27:48.692 Latency(us) 00:27:48.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:48.692 =================================================================================================================== 00:27:48.692 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:48.692 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 64913 00:27:48.950 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:48.950 01:30:24 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:48.950 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:48.950 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:48.950 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:48.950 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:48.950 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:48.950 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=65652 00:27:48.950 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 65652 /var/tmp/bperf.sock 00:27:48.950 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:48.950 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 65652 ']' 00:27:48.950 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:48.950 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:48.950 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:48.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:48.950 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:48.950 01:30:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:48.950 [2024-05-15 01:30:24.615652] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:27:48.950 [2024-05-15 01:30:24.615706] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65652 ] 00:27:48.950 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:48.950 Zero copy mechanism will not be used. 
00:27:49.208 EAL: No free 2048 kB hugepages reported on node 1 00:27:49.208 [2024-05-15 01:30:24.685735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.208 [2024-05-15 01:30:24.760240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.774 01:30:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:49.774 01:30:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:27:49.774 01:30:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:49.774 01:30:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:49.774 01:30:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:50.033 01:30:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:50.033 01:30:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:50.293 nvme0n1 00:27:50.293 01:30:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:50.293 01:30:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:50.293 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:50.293 Zero copy mechanism will not be used. 00:27:50.293 Running I/O for 2 seconds... 
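The fourth and last clean-digest pass is running above; taken together, the four passes exercise the same attach/verify/teardown flow and differ only in the workload, I/O size and queue depth handed to run_bperf (the trailing false is the scan_dsa flag). A compact way to see the matrix, not how digest.sh itself is structured:

for spec in "randread 4096 128" "randread 131072 16" "randwrite 4096 128" "randwrite 131072 16"; do
    set -- $spec
    echo "bdevperf -w $1 -o $2 -q $3"    # prints the varying part of each pass's command line
done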
00:27:52.831 00:27:52.831 Latency(us) 00:27:52.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:52.831 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:52.831 nvme0n1 : 2.01 2275.92 284.49 0.00 0.00 7014.97 5242.88 30618.42 00:27:52.831 =================================================================================================================== 00:27:52.831 Total : 2275.92 284.49 0.00 0.00 7014.97 5242.88 30618.42 00:27:52.831 0 00:27:52.831 01:30:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:52.831 01:30:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:52.831 01:30:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:52.831 01:30:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:52.831 | select(.opcode=="crc32c") 00:27:52.831 | "\(.module_name) \(.executed)"' 00:27:52.831 01:30:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:52.831 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:52.831 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:52.831 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:52.831 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:52.831 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 65652 00:27:52.831 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 65652 ']' 00:27:52.831 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 65652 00:27:52.831 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:27:52.831 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:52.831 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65652 00:27:52.831 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:52.831 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:52.831 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65652' 00:27:52.831 killing process with pid 65652 00:27:52.831 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 65652 00:27:52.831 Received shutdown signal, test time was about 2.000000 seconds 00:27:52.831 00:27:52.831 Latency(us) 00:27:52.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:52.831 =================================================================================================================== 00:27:52.831 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:52.831 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 65652 00:27:52.831 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 63473 00:27:52.831 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@946 -- # '[' -z 63473 ']' 00:27:52.831 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 63473 00:27:52.831 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:27:52.831 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:52.832 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63473 00:27:52.832 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:52.832 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:52.832 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63473' 00:27:52.832 killing process with pid 63473 00:27:52.832 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 63473 00:27:52.832 [2024-05-15 01:30:28.464892] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:52.832 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 63473 00:27:53.091 00:27:53.091 real 0m16.735s 00:27:53.091 user 0m31.849s 00:27:53.091 sys 0m4.547s 00:27:53.091 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:53.091 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:53.091 ************************************ 00:27:53.091 END TEST nvmf_digest_clean 00:27:53.091 ************************************ 00:27:53.091 01:30:28 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:53.091 01:30:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:53.091 01:30:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:53.091 01:30:28 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:53.091 ************************************ 00:27:53.091 START TEST nvmf_digest_error 00:27:53.091 ************************************ 00:27:53.091 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:27:53.091 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:53.091 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:53.091 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:53.091 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:53.091 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=66318 00:27:53.091 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 66318 00:27:53.091 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:53.091 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 66318 ']' 00:27:53.091 01:30:28 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.091 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:53.091 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.091 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:53.091 01:30:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:53.389 [2024-05-15 01:30:28.817930] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:27:53.389 [2024-05-15 01:30:28.817970] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:53.389 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.389 [2024-05-15 01:30:28.892125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.389 [2024-05-15 01:30:28.960698] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.389 [2024-05-15 01:30:28.960739] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:53.389 [2024-05-15 01:30:28.960748] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:53.389 [2024-05-15 01:30:28.960757] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:53.389 [2024-05-15 01:30:28.960764] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
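The error-path test that starts here brings up its own nvmf target with --wait-for-rpc so that, before initialization completes, the crc32c opcode can be re-routed to the accel error-injection module (the accel_assign_opc call that follows in the trace); the rest of the target setup (null bdev, TCP transport, listener on 10.0.0.2 port 4420) then proceeds as usual. A rough sketch of that target-side start, with paths relative to an SPDK checkout and the network namespace name taken from the log:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

# /var/tmp/spdk.sock is the target's default RPC socket, which is what rpc_cmd in the trace talks to
./scripts/rpc.py accel_assign_opc -o crc32c -m error   # 'Operation crc32c will be assigned to module error'
# ... followed by the usual framework start and the target configuration shown below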
00:27:53.389 [2024-05-15 01:30:28.960787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.958 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:53.958 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:53.958 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:53.958 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:53.958 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:54.218 [2024-05-15 01:30:29.658826] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:54.218 null0 00:27:54.218 [2024-05-15 01:30:29.750982] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:54.218 [2024-05-15 01:30:29.774974] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:54.218 [2024-05-15 01:30:29.775209] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=66511 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 66511 /var/tmp/bperf.sock 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 66511 ']' 00:27:54.218 
01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:54.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:54.218 01:30:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:54.218 [2024-05-15 01:30:29.826110] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:27:54.218 [2024-05-15 01:30:29.826157] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66511 ] 00:27:54.218 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.218 [2024-05-15 01:30:29.894846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:54.477 [2024-05-15 01:30:29.970990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:55.046 01:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:55.046 01:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:55.046 01:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:55.046 01:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:55.305 01:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:55.305 01:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.305 01:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:55.305 01:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.305 01:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:55.305 01:30:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:55.565 nvme0n1 00:27:55.565 01:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:55.565 01:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.565 01:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:55.565 01:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.565 01:30:31 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:55.565 01:30:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:55.825 Running I/O for 2 seconds... 00:27:55.825 [2024-05-15 01:30:31.294197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.825 [2024-05-15 01:30:31.294231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.825 [2024-05-15 01:30:31.294243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.825 [2024-05-15 01:30:31.304492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.825 [2024-05-15 01:30:31.304519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.825 [2024-05-15 01:30:31.304531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.825 [2024-05-15 01:30:31.311986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.825 [2024-05-15 01:30:31.312009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.825 [2024-05-15 01:30:31.312020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.825 [2024-05-15 01:30:31.321760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.825 [2024-05-15 01:30:31.321782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.825 [2024-05-15 01:30:31.321793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.825 [2024-05-15 01:30:31.330107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.825 [2024-05-15 01:30:31.330129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.825 [2024-05-15 01:30:31.330141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.825 [2024-05-15 01:30:31.340281] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.825 [2024-05-15 01:30:31.340304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.825 [2024-05-15 01:30:31.340315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.825 [2024-05-15 01:30:31.347798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x10a2bb0) 00:27:55.825 [2024-05-15 01:30:31.347821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.825 [2024-05-15 01:30:31.347831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.825 [2024-05-15 01:30:31.358019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.825 [2024-05-15 01:30:31.358040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.825 [2024-05-15 01:30:31.358051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.825 [2024-05-15 01:30:31.366095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.825 [2024-05-15 01:30:31.366117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.825 [2024-05-15 01:30:31.366128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.825 [2024-05-15 01:30:31.375671] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.825 [2024-05-15 01:30:31.375693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.825 [2024-05-15 01:30:31.375704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.825 [2024-05-15 01:30:31.384696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.825 [2024-05-15 01:30:31.384718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.825 [2024-05-15 01:30:31.384728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.825 [2024-05-15 01:30:31.394461] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.825 [2024-05-15 01:30:31.394483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.825 [2024-05-15 01:30:31.394494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.825 [2024-05-15 01:30:31.402346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.825 [2024-05-15 01:30:31.402367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.825 [2024-05-15 01:30:31.402378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.825 [2024-05-15 01:30:31.411795] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.826 [2024-05-15 01:30:31.411817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.826 [2024-05-15 01:30:31.411831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.826 [2024-05-15 01:30:31.421058] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.826 [2024-05-15 01:30:31.421079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.826 [2024-05-15 01:30:31.421090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.826 [2024-05-15 01:30:31.429731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.826 [2024-05-15 01:30:31.429752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.826 [2024-05-15 01:30:31.429762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.826 [2024-05-15 01:30:31.438602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.826 [2024-05-15 01:30:31.438623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.826 [2024-05-15 01:30:31.438634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.826 [2024-05-15 01:30:31.446899] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.826 [2024-05-15 01:30:31.446921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.826 [2024-05-15 01:30:31.446931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.826 [2024-05-15 01:30:31.457511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.826 [2024-05-15 01:30:31.457533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.826 [2024-05-15 01:30:31.457544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.826 [2024-05-15 01:30:31.465639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.826 [2024-05-15 01:30:31.465661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.826 [2024-05-15 01:30:31.465672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:55.826 [2024-05-15 01:30:31.474798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.826 [2024-05-15 01:30:31.474819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.826 [2024-05-15 01:30:31.474830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.826 [2024-05-15 01:30:31.483880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.826 [2024-05-15 01:30:31.483901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.826 [2024-05-15 01:30:31.483912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.826 [2024-05-15 01:30:31.491890] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.826 [2024-05-15 01:30:31.491914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.826 [2024-05-15 01:30:31.491925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.826 [2024-05-15 01:30:31.501427] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.826 [2024-05-15 01:30:31.501449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.826 [2024-05-15 01:30:31.501459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:55.826 [2024-05-15 01:30:31.510003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:55.826 [2024-05-15 01:30:31.510024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:50 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.826 [2024-05-15 01:30:31.510035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.086 [2024-05-15 01:30:31.519158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.086 [2024-05-15 01:30:31.519180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.519196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.529413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.529436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.529446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.536874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.536896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.536906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.546541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.546562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.546573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.556490] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.556512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.556522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.564406] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.564428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.564438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.573847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.573869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.573879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.581916] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.581937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.581948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.590576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.590597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.590608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.599980] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.600001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.600011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.608521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.608542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.608553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.617847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.617868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.617879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.627087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.627109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.627119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.635866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.635887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.635898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.644237] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.644258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.644272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.653035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.653056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:56.087 [2024-05-15 01:30:31.653067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.661806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.661827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.661838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.671080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.671101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.671111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.679751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.679773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.679783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.688297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.688318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.688329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.697690] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.697713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.697724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.706674] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.706696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.706706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.715641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.715663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 
lba:19812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.715673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.724015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.724040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.724051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.732865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.732887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.732898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.742269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.742291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.742302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.750775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.750796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.750807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.760002] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.760025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.087 [2024-05-15 01:30:31.760036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.087 [2024-05-15 01:30:31.768492] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.087 [2024-05-15 01:30:31.768517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.088 [2024-05-15 01:30:31.768528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.348 [2024-05-15 01:30:31.777738] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.348 [2024-05-15 01:30:31.777762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.348 [2024-05-15 01:30:31.777773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.348 [2024-05-15 01:30:31.787201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.348 [2024-05-15 01:30:31.787223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.348 [2024-05-15 01:30:31.787234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.348 [2024-05-15 01:30:31.795719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.348 [2024-05-15 01:30:31.795741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.348 [2024-05-15 01:30:31.795755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.348 [2024-05-15 01:30:31.805225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.348 [2024-05-15 01:30:31.805248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.348 [2024-05-15 01:30:31.805259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.348 [2024-05-15 01:30:31.813368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.348 [2024-05-15 01:30:31.813391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.348 [2024-05-15 01:30:31.813402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.348 [2024-05-15 01:30:31.822576] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.348 [2024-05-15 01:30:31.822599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.348 [2024-05-15 01:30:31.822610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.348 [2024-05-15 01:30:31.831575] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.348 [2024-05-15 01:30:31.831597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.348 [2024-05-15 01:30:31.831608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.348 [2024-05-15 01:30:31.841030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 
00:27:56.348 [2024-05-15 01:30:31.841052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.348 [2024-05-15 01:30:31.841063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.348 [2024-05-15 01:30:31.848902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.348 [2024-05-15 01:30:31.848924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.348 [2024-05-15 01:30:31.848935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.348 [2024-05-15 01:30:31.858448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.348 [2024-05-15 01:30:31.858469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.348 [2024-05-15 01:30:31.858479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.348 [2024-05-15 01:30:31.867061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.348 [2024-05-15 01:30:31.867083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.348 [2024-05-15 01:30:31.867094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.348 [2024-05-15 01:30:31.876188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.348 [2024-05-15 01:30:31.876221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.348 [2024-05-15 01:30:31.876232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.348 [2024-05-15 01:30:31.883933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.348 [2024-05-15 01:30:31.883955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.348 [2024-05-15 01:30:31.883966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.348 [2024-05-15 01:30:31.893823] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.348 [2024-05-15 01:30:31.893846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.348 [2024-05-15 01:30:31.893857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.348 [2024-05-15 01:30:31.903573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.348 [2024-05-15 01:30:31.903594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.348 [2024-05-15 01:30:31.903604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.348 [2024-05-15 01:30:31.911560] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.348 [2024-05-15 01:30:31.911582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.348 [2024-05-15 01:30:31.911593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.348 [2024-05-15 01:30:31.920032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.348 [2024-05-15 01:30:31.920053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.348 [2024-05-15 01:30:31.920064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.348 [2024-05-15 01:30:31.929600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.348 [2024-05-15 01:30:31.929622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.348 [2024-05-15 01:30:31.929632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.348 [2024-05-15 01:30:31.939137] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.348 [2024-05-15 01:30:31.939159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.348 [2024-05-15 01:30:31.939170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.348 [2024-05-15 01:30:31.947500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.349 [2024-05-15 01:30:31.947522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.349 [2024-05-15 01:30:31.947532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.349 [2024-05-15 01:30:31.957734] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.349 [2024-05-15 01:30:31.957756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.349 [2024-05-15 01:30:31.957767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.349 [2024-05-15 01:30:31.965822] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.349 [2024-05-15 01:30:31.965844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.349 [2024-05-15 01:30:31.965855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.349 [2024-05-15 01:30:31.975683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.349 [2024-05-15 01:30:31.975705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.349 [2024-05-15 01:30:31.975716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.349 [2024-05-15 01:30:31.984408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.349 [2024-05-15 01:30:31.984431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.349 [2024-05-15 01:30:31.984442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.349 [2024-05-15 01:30:31.992896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.349 [2024-05-15 01:30:31.992919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.349 [2024-05-15 01:30:31.992930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.349 [2024-05-15 01:30:32.001366] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.349 [2024-05-15 01:30:32.001389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.349 [2024-05-15 01:30:32.001400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.349 [2024-05-15 01:30:32.011099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.349 [2024-05-15 01:30:32.011121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.349 [2024-05-15 01:30:32.011132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.349 [2024-05-15 01:30:32.020089] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.349 [2024-05-15 01:30:32.020111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.349 [2024-05-15 01:30:32.020121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:27:56.349 [2024-05-15 01:30:32.028541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.349 [2024-05-15 01:30:32.028563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.349 [2024-05-15 01:30:32.028578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.349 [2024-05-15 01:30:32.038378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.349 [2024-05-15 01:30:32.038401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:6042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.349 [2024-05-15 01:30:32.038413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.609 [2024-05-15 01:30:32.047036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.609 [2024-05-15 01:30:32.047058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.609 [2024-05-15 01:30:32.047068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.609 [2024-05-15 01:30:32.055886] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.609 [2024-05-15 01:30:32.055908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.609 [2024-05-15 01:30:32.055919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.609 [2024-05-15 01:30:32.064368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.609 [2024-05-15 01:30:32.064389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.609 [2024-05-15 01:30:32.064400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.609 [2024-05-15 01:30:32.073617] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.609 [2024-05-15 01:30:32.073638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.609 [2024-05-15 01:30:32.073649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.609 [2024-05-15 01:30:32.083159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.609 [2024-05-15 01:30:32.083181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.609 [2024-05-15 01:30:32.083199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.609 [2024-05-15 01:30:32.091047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.609 [2024-05-15 01:30:32.091069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.609 [2024-05-15 01:30:32.091080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.609 [2024-05-15 01:30:32.100965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.609 [2024-05-15 01:30:32.100988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.609 [2024-05-15 01:30:32.100999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.609 [2024-05-15 01:30:32.109545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.609 [2024-05-15 01:30:32.109570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.609 [2024-05-15 01:30:32.109580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.609 [2024-05-15 01:30:32.118092] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.609 [2024-05-15 01:30:32.118114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.609 [2024-05-15 01:30:32.118125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.609 [2024-05-15 01:30:32.126753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.610 [2024-05-15 01:30:32.126775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.610 [2024-05-15 01:30:32.126786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.610 [2024-05-15 01:30:32.136107] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.610 [2024-05-15 01:30:32.136128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.610 [2024-05-15 01:30:32.136138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.610 [2024-05-15 01:30:32.145064] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.610 [2024-05-15 01:30:32.145086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.610 [2024-05-15 01:30:32.145097] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.610 [2024-05-15 01:30:32.154529] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.610 [2024-05-15 01:30:32.154552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:6543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.610 [2024-05-15 01:30:32.154564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.610 [2024-05-15 01:30:32.162269] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.610 [2024-05-15 01:30:32.162291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.610 [2024-05-15 01:30:32.162302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.610 [2024-05-15 01:30:32.171746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.610 [2024-05-15 01:30:32.171768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.610 [2024-05-15 01:30:32.171778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.610 [2024-05-15 01:30:32.181049] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.610 [2024-05-15 01:30:32.181070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.610 [2024-05-15 01:30:32.181081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.610 [2024-05-15 01:30:32.188912] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.610 [2024-05-15 01:30:32.188933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:7454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.610 [2024-05-15 01:30:32.188945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.610 [2024-05-15 01:30:32.198978] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.610 [2024-05-15 01:30:32.199000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.610 [2024-05-15 01:30:32.199010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.610 [2024-05-15 01:30:32.206955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.610 [2024-05-15 01:30:32.206976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:56.610 [2024-05-15 01:30:32.206987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.610 [2024-05-15 01:30:32.215929] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.610 [2024-05-15 01:30:32.215951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.610 [2024-05-15 01:30:32.215962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.610 [2024-05-15 01:30:32.225259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.610 [2024-05-15 01:30:32.225280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.610 [2024-05-15 01:30:32.225291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.610 [2024-05-15 01:30:32.234324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.610 [2024-05-15 01:30:32.234346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.610 [2024-05-15 01:30:32.234357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.610 [2024-05-15 01:30:32.242864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.610 [2024-05-15 01:30:32.242886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.610 [2024-05-15 01:30:32.242897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.610 [2024-05-15 01:30:32.251022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.610 [2024-05-15 01:30:32.251043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.610 [2024-05-15 01:30:32.251054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.610 [2024-05-15 01:30:32.260751] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.610 [2024-05-15 01:30:32.260775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.610 [2024-05-15 01:30:32.260786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.610 [2024-05-15 01:30:32.269019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.610 [2024-05-15 01:30:32.269040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:15185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.610 [2024-05-15 01:30:32.269051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.610 [2024-05-15 01:30:32.277683] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.610 [2024-05-15 01:30:32.277704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.610 [2024-05-15 01:30:32.277715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.610 [2024-05-15 01:30:32.287303] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.610 [2024-05-15 01:30:32.287324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.610 [2024-05-15 01:30:32.287335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.610 [2024-05-15 01:30:32.295975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.610 [2024-05-15 01:30:32.295997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.610 [2024-05-15 01:30:32.296008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.305891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.305912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.305923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.314166] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.314187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.314203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.323361] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.323382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.323393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.331494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.331515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.331525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.340323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.340344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.340354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.349897] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.349919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.349930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.359133] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.359156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.359166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.367327] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.367349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.367359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.377658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.377680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.377691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.385033] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.385054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.385065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.395926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 
00:27:56.871 [2024-05-15 01:30:32.395948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:21236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.395958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.404287] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.404309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.404320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.415533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.415554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.415567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.424288] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.424309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.424320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.433196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.433218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.433229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.441526] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.441548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.441559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.451016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.451037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.451048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.459120] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.459141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.459151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.468256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.468277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.468288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.476771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.476792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.476803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.486531] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.486553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.486564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.494581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.494607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.494617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.503907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.503929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.503939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.517632] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.517653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.517663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.526315] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.526336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.526347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.535212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.871 [2024-05-15 01:30:32.535233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.871 [2024-05-15 01:30:32.535244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.871 [2024-05-15 01:30:32.544121] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.872 [2024-05-15 01:30:32.544143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.872 [2024-05-15 01:30:32.544154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:56.872 [2024-05-15 01:30:32.554035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:56.872 [2024-05-15 01:30:32.554057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:56.872 [2024-05-15 01:30:32.554068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.563651] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.563673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.563683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.572736] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.572757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.572768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.581716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.581737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:18856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.581747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.593907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.593929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.593939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.605291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.605312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.605323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.613647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.613667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.613678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.623422] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.623443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.623454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.631788] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.631809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.631820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.641885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.641907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.641917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.653334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.653356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.653367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.663018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.663039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.663053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.672990] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.673011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.673022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.680953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.680975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.680985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.692197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.692219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.692229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.700452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.700473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.700483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.709878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.709899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.709910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.719939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.719961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.719971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.730486] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.730507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.730518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.739454] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.739475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.739485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.748446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.748467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.748478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.756692] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.756713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.756723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.766919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.766940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.766951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.774636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.774657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.132 [2024-05-15 01:30:32.774667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.785270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.132 [2024-05-15 01:30:32.785291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:57.132 [2024-05-15 01:30:32.785302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.132 [2024-05-15 01:30:32.797225] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.133 [2024-05-15 01:30:32.797246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.133 [2024-05-15 01:30:32.797257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.133 [2024-05-15 01:30:32.806700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.133 [2024-05-15 01:30:32.806723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.133 [2024-05-15 01:30:32.806733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.133 [2024-05-15 01:30:32.815372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.133 [2024-05-15 01:30:32.815394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.133 [2024-05-15 01:30:32.815404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.392 [2024-05-15 01:30:32.825003] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.392 [2024-05-15 01:30:32.825025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-05-15 01:30:32.825038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.392 [2024-05-15 01:30:32.834034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.392 [2024-05-15 01:30:32.834055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-05-15 01:30:32.834066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.392 [2024-05-15 01:30:32.844501] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.392 [2024-05-15 01:30:32.844523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-05-15 01:30:32.844534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.392 [2024-05-15 01:30:32.852386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.392 [2024-05-15 01:30:32.852407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 
lba:4876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-05-15 01:30:32.852418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.392 [2024-05-15 01:30:32.861953] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.392 [2024-05-15 01:30:32.861974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-05-15 01:30:32.861985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.392 [2024-05-15 01:30:32.871961] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.392 [2024-05-15 01:30:32.871982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.392 [2024-05-15 01:30:32.871993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:32.880431] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:32.880451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:32.880462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:32.889293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:32.889314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:32.889325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:32.903561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:32.903584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:32.903595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:32.912413] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:32.912438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:32.912449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:32.922060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:32.922081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:32.922092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:32.929836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:32.929857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:32.929868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:32.940310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:32.940331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:32.940342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:32.948537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:32.948558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:32.948568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:32.959488] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:32.959509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:32.959520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:32.969621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:32.969642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:32.969652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:32.978777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:32.978800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:32.978810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:32.988375] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 
00:27:57.393 [2024-05-15 01:30:32.988397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:5268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:32.988408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:32.996402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:32.996423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:32.996434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:33.005822] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:33.005846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:33.005859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:33.013516] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:33.013537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:33.013548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:33.023942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:33.023964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:33.023975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:33.035521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:33.035543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:33.035553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:33.043971] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:33.043993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:33.044004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:33.054420] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:33.054442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:33.054453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:33.063070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:33.063091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:33.063102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:33.072948] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:33.072970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:33.072984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.393 [2024-05-15 01:30:33.081477] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.393 [2024-05-15 01:30:33.081499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.393 [2024-05-15 01:30:33.081510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.652 [2024-05-15 01:30:33.096925] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.652 [2024-05-15 01:30:33.096948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.652 [2024-05-15 01:30:33.096959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.652 [2024-05-15 01:30:33.105106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.652 [2024-05-15 01:30:33.105127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.652 [2024-05-15 01:30:33.105137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.652 [2024-05-15 01:30:33.114215] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.652 [2024-05-15 01:30:33.114237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.652 [2024-05-15 01:30:33.114247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.652 [2024-05-15 01:30:33.123946] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.652 [2024-05-15 01:30:33.123967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.652 [2024-05-15 01:30:33.123978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.652 [2024-05-15 01:30:33.132205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.652 [2024-05-15 01:30:33.132226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.652 [2024-05-15 01:30:33.132237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.652 [2024-05-15 01:30:33.142557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.652 [2024-05-15 01:30:33.142579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.652 [2024-05-15 01:30:33.142590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.652 [2024-05-15 01:30:33.151128] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.652 [2024-05-15 01:30:33.151149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.652 [2024-05-15 01:30:33.151160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.652 [2024-05-15 01:30:33.160195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.652 [2024-05-15 01:30:33.160220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.652 [2024-05-15 01:30:33.160230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.652 [2024-05-15 01:30:33.169867] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.653 [2024-05-15 01:30:33.169889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-05-15 01:30:33.169899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.653 [2024-05-15 01:30:33.178874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.653 [2024-05-15 01:30:33.178897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-05-15 01:30:33.178908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:27:57.653 [2024-05-15 01:30:33.186750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.653 [2024-05-15 01:30:33.186772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-05-15 01:30:33.186782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.653 [2024-05-15 01:30:33.196460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.653 [2024-05-15 01:30:33.196482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-05-15 01:30:33.196492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.653 [2024-05-15 01:30:33.205176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.653 [2024-05-15 01:30:33.205203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-05-15 01:30:33.205214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.653 [2024-05-15 01:30:33.218008] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.653 [2024-05-15 01:30:33.218030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-05-15 01:30:33.218041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.653 [2024-05-15 01:30:33.226208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.653 [2024-05-15 01:30:33.226229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-05-15 01:30:33.226240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.653 [2024-05-15 01:30:33.234821] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.653 [2024-05-15 01:30:33.234843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-05-15 01:30:33.234854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.653 [2024-05-15 01:30:33.246078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.653 [2024-05-15 01:30:33.246099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-05-15 01:30:33.246110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.653 [2024-05-15 01:30:33.254176] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.653 [2024-05-15 01:30:33.254202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-05-15 01:30:33.254214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.653 [2024-05-15 01:30:33.264179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.653 [2024-05-15 01:30:33.264204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-05-15 01:30:33.264215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.653 [2024-05-15 01:30:33.271935] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10a2bb0) 00:27:57.653 [2024-05-15 01:30:33.271956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:57.653 [2024-05-15 01:30:33.271966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:57.653 00:27:57.653 Latency(us) 00:27:57.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:57.653 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:57.653 nvme0n1 : 2.00 27555.09 107.64 0.00 0.00 4640.35 2202.01 21915.24 00:27:57.653 =================================================================================================================== 00:27:57.653 Total : 27555.09 107.64 0.00 0.00 4640.35 2202.01 21915.24 00:27:57.653 0 00:27:57.653 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:57.653 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:57.653 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:57.653 | .driver_specific 00:27:57.653 | .nvme_error 00:27:57.653 | .status_code 00:27:57.653 | .command_transient_transport_error' 00:27:57.653 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:57.912 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 216 > 0 )) 00:27:57.912 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 66511 00:27:57.912 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 66511 ']' 00:27:57.912 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 66511 00:27:57.912 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:27:57.912 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:57.912 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps 
--no-headers -o comm= 66511 00:27:57.912 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:27:57.912 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:27:57.912 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66511' 00:27:57.912 killing process with pid 66511 00:27:57.912 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 66511 00:27:57.912 Received shutdown signal, test time was about 2.000000 seconds 00:27:57.912 00:27:57.912 Latency(us) 00:27:57.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:57.912 =================================================================================================================== 00:27:57.912 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:57.912 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 66511 00:27:58.171 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:58.171 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:58.171 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:58.171 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:58.171 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:58.171 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=67243 00:27:58.171 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 67243 /var/tmp/bperf.sock 00:27:58.171 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:58.171 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 67243 ']' 00:27:58.171 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:58.171 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:58.171 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:58.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:58.171 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:58.171 01:30:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:58.171 [2024-05-15 01:30:33.792219] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:27:58.171 [2024-05-15 01:30:33.792273] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67243 ] 00:27:58.171 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:58.171 Zero copy mechanism will not be used. 
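The trace above is host/digest.sh checking the outcome of the first run: get_transient_errcount reads bdevperf's per-bdev I/O statistics over its RPC socket, and the test only proceeds if the number of COMMAND TRANSIENT TRANSPORT ERROR completions is non-zero (216 here) before killing the bdevperf process and preparing the next workload. A minimal sketch of that check, reconstructed from the traced commands (the rpc.py path is shortened and the helper body is an approximation of the script, not a verbatim copy):

get_transient_errcount() {
    local bdev=$1
    # bdev_nvme_set_options --nvme-error-stat (issued when the host is set up in this
    # test) is what exposes these per-status-code NVMe error counters in the iostat output.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
(( errcount > 0 ))   # 216 > 0 in the run above, so the digest-error case passes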
00:27:58.171 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.171 [2024-05-15 01:30:33.859294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.430 [2024-05-15 01:30:33.929403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:58.997 01:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:58.997 01:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:27:58.997 01:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:58.997 01:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:59.255 01:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:59.255 01:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.255 01:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:59.255 01:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.255 01:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:59.255 01:30:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:59.513 nvme0n1 00:27:59.513 01:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:59.513 01:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.513 01:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:59.513 01:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.513 01:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:59.513 01:30:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:59.772 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:59.772 Zero copy mechanism will not be used. 00:27:59.772 Running I/O for 2 seconds... 
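Before this second 2-second run begins, the trace shows run_bperf_err setting up a 131072-byte randread workload at queue depth 16 with data digest enabled and crc32c error injection armed. A condensed, non-verbatim sketch of that traced sequence (full workspace paths shortened to relative ones; rpc_cmd and waitforlisten are the autotest helpers, and the target socket of rpc_cmd is not shown in this excerpt):

# Start bdevperf as the NVMe/TCP host; -z makes it wait for an explicit
# perform_tests RPC instead of starting I/O on its own.
./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
waitforlisten $! /var/tmp/bperf.sock

# Keep per-status-code NVMe error statistics and retry failed I/O (-1 = unlimited),
# so the injected digest errors are counted rather than failing the workload outright.
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any previously injected crc32c errors, then attach the controller with
# --ddgst so the CRC32C data digest of received data PDUs is verified.
rpc_cmd accel_error_inject_error -o crc32c -t disable
./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm crc32c corruption (traced as: -o crc32c -t corrupt -i 32) and kick off the run;
# each corrupted digest surfaces in the log as a "data digest error" followed by a
# COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests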
00:27:59.772 [2024-05-15 01:30:35.275542] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:27:59.772 [2024-05-15 01:30:35.275575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.772 [2024-05-15 01:30:35.275587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:59.772 [2024-05-15 01:30:35.287928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:27:59.773 [2024-05-15 01:30:35.287953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.773 [2024-05-15 01:30:35.287965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:59.773 [2024-05-15 01:30:35.298470] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:27:59.773 [2024-05-15 01:30:35.298493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.773 [2024-05-15 01:30:35.298505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:59.773 [2024-05-15 01:30:35.308965] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:27:59.773 [2024-05-15 01:30:35.308987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.773 [2024-05-15 01:30:35.308998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.773 [2024-05-15 01:30:35.320186] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:27:59.773 [2024-05-15 01:30:35.320214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.773 [2024-05-15 01:30:35.320225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:59.773 [2024-05-15 01:30:35.331118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:27:59.773 [2024-05-15 01:30:35.331144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.773 [2024-05-15 01:30:35.331156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:59.773 [2024-05-15 01:30:35.341887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:27:59.773 [2024-05-15 01:30:35.341909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.773 [2024-05-15 01:30:35.341920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:59.773 [2024-05-15 01:30:35.352919] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:27:59.773 [2024-05-15 01:30:35.352942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.773 [2024-05-15 01:30:35.352952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.773 [2024-05-15 01:30:35.365175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:27:59.773 [2024-05-15 01:30:35.365203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.773 [2024-05-15 01:30:35.365214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:59.773 [2024-05-15 01:30:35.378243] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:27:59.773 [2024-05-15 01:30:35.378266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.773 [2024-05-15 01:30:35.378276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:59.773 [2024-05-15 01:30:35.389856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:27:59.773 [2024-05-15 01:30:35.389877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.773 [2024-05-15 01:30:35.389888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:59.773 [2024-05-15 01:30:35.401119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:27:59.773 [2024-05-15 01:30:35.401140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.773 [2024-05-15 01:30:35.401151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.773 [2024-05-15 01:30:35.412346] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:27:59.773 [2024-05-15 01:30:35.412368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.773 [2024-05-15 01:30:35.412378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:59.773 [2024-05-15 01:30:35.423387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:27:59.773 [2024-05-15 01:30:35.423409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.773 [2024-05-15 01:30:35.423419] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:59.773 [2024-05-15 01:30:35.435271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:27:59.773 [2024-05-15 01:30:35.435293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.773 [2024-05-15 01:30:35.435304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:59.773 [2024-05-15 01:30:35.446404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:27:59.773 [2024-05-15 01:30:35.446426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.773 [2024-05-15 01:30:35.446437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:59.773 [2024-05-15 01:30:35.457188] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:27:59.773 [2024-05-15 01:30:35.457216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.773 [2024-05-15 01:30:35.457226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.032 [2024-05-15 01:30:35.468884] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.032 [2024-05-15 01:30:35.468906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.032 [2024-05-15 01:30:35.468916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.032 [2024-05-15 01:30:35.481285] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.032 [2024-05-15 01:30:35.481306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.032 [2024-05-15 01:30:35.481316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:00.032 [2024-05-15 01:30:35.493271] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.032 [2024-05-15 01:30:35.493293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.032 [2024-05-15 01:30:35.493303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.032 [2024-05-15 01:30:35.504507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.032 [2024-05-15 01:30:35.504529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:00.032 [2024-05-15 01:30:35.504540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.032 [2024-05-15 01:30:35.517104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.032 [2024-05-15 01:30:35.517127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.032 [2024-05-15 01:30:35.517138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.032 [2024-05-15 01:30:35.530405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.032 [2024-05-15 01:30:35.530426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.032 [2024-05-15 01:30:35.530440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:00.032 [2024-05-15 01:30:35.542510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.032 [2024-05-15 01:30:35.542532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.032 [2024-05-15 01:30:35.542542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.032 [2024-05-15 01:30:35.565071] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.032 [2024-05-15 01:30:35.565092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.032 [2024-05-15 01:30:35.565102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.032 [2024-05-15 01:30:35.580405] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.032 [2024-05-15 01:30:35.580428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.032 [2024-05-15 01:30:35.580438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.032 [2024-05-15 01:30:35.592445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.032 [2024-05-15 01:30:35.592466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.032 [2024-05-15 01:30:35.592476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:00.032 [2024-05-15 01:30:35.604306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.032 [2024-05-15 01:30:35.604327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.032 [2024-05-15 01:30:35.604336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.032 [2024-05-15 01:30:35.622659] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.032 [2024-05-15 01:30:35.622680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.032 [2024-05-15 01:30:35.622690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.032 [2024-05-15 01:30:35.635950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.032 [2024-05-15 01:30:35.635970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.032 [2024-05-15 01:30:35.635980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.033 [2024-05-15 01:30:35.649716] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.033 [2024-05-15 01:30:35.649738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.033 [2024-05-15 01:30:35.649749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:00.033 [2024-05-15 01:30:35.662968] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.033 [2024-05-15 01:30:35.662989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.033 [2024-05-15 01:30:35.663000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.033 [2024-05-15 01:30:35.676618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.033 [2024-05-15 01:30:35.676640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.033 [2024-05-15 01:30:35.676651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.033 [2024-05-15 01:30:35.696808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.033 [2024-05-15 01:30:35.696829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.033 [2024-05-15 01:30:35.696839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.033 [2024-05-15 01:30:35.711475] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.033 [2024-05-15 01:30:35.711497] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.033 [2024-05-15 01:30:35.711507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:00.292 [2024-05-15 01:30:35.733597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.292 [2024-05-15 01:30:35.733619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.292 [2024-05-15 01:30:35.733629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.292 [2024-05-15 01:30:35.748934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.292 [2024-05-15 01:30:35.748956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.292 [2024-05-15 01:30:35.748966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.292 [2024-05-15 01:30:35.759955] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.292 [2024-05-15 01:30:35.759977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.292 [2024-05-15 01:30:35.759986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.292 [2024-05-15 01:30:35.770679] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.292 [2024-05-15 01:30:35.770700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.292 [2024-05-15 01:30:35.770710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:00.292 [2024-05-15 01:30:35.783168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.292 [2024-05-15 01:30:35.783196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.292 [2024-05-15 01:30:35.783210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.292 [2024-05-15 01:30:35.795514] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.292 [2024-05-15 01:30:35.795535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.292 [2024-05-15 01:30:35.795545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.292 [2024-05-15 01:30:35.806858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 
00:28:00.292 [2024-05-15 01:30:35.806881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.292 [2024-05-15 01:30:35.806891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.292 [2024-05-15 01:30:35.817757] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.292 [2024-05-15 01:30:35.817779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.292 [2024-05-15 01:30:35.817790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:00.292 [2024-05-15 01:30:35.828792] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.292 [2024-05-15 01:30:35.828814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.293 [2024-05-15 01:30:35.828824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.293 [2024-05-15 01:30:35.840306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.293 [2024-05-15 01:30:35.840327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.293 [2024-05-15 01:30:35.840338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.293 [2024-05-15 01:30:35.851750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.293 [2024-05-15 01:30:35.851771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.293 [2024-05-15 01:30:35.851781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.293 [2024-05-15 01:30:35.862908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.293 [2024-05-15 01:30:35.862930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.293 [2024-05-15 01:30:35.862941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:00.293 [2024-05-15 01:30:35.874275] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.293 [2024-05-15 01:30:35.874297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.293 [2024-05-15 01:30:35.874307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.293 [2024-05-15 01:30:35.887966] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.293 [2024-05-15 01:30:35.887992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.293 [2024-05-15 01:30:35.888003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.293 [2024-05-15 01:30:35.898734] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.293 [2024-05-15 01:30:35.898756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.293 [2024-05-15 01:30:35.898766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.293 [2024-05-15 01:30:35.910820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.293 [2024-05-15 01:30:35.910841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.293 [2024-05-15 01:30:35.910852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:00.293 [2024-05-15 01:30:35.922996] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.293 [2024-05-15 01:30:35.923016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.293 [2024-05-15 01:30:35.923026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.293 [2024-05-15 01:30:35.933851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.293 [2024-05-15 01:30:35.933872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.293 [2024-05-15 01:30:35.933883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.293 [2024-05-15 01:30:35.947141] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.293 [2024-05-15 01:30:35.947163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.293 [2024-05-15 01:30:35.947173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.293 [2024-05-15 01:30:35.956127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.293 [2024-05-15 01:30:35.956149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.293 [2024-05-15 01:30:35.956159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:28:00.293 [2024-05-15 01:30:35.967153] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.293 [2024-05-15 01:30:35.967175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.293 [2024-05-15 01:30:35.967185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.293 [2024-05-15 01:30:35.978634] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.293 [2024-05-15 01:30:35.978657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.293 [2024-05-15 01:30:35.978668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.552 [2024-05-15 01:30:35.990629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.552 [2024-05-15 01:30:35.990651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.552 [2024-05-15 01:30:35.990662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.552 [2024-05-15 01:30:36.003270] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.552 [2024-05-15 01:30:36.003292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.552 [2024-05-15 01:30:36.003303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:00.552 [2024-05-15 01:30:36.016404] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.552 [2024-05-15 01:30:36.016426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.553 [2024-05-15 01:30:36.016437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.553 [2024-05-15 01:30:36.028143] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.553 [2024-05-15 01:30:36.028164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.553 [2024-05-15 01:30:36.028174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.553 [2024-05-15 01:30:36.038883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.553 [2024-05-15 01:30:36.038904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.553 [2024-05-15 01:30:36.038914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.553 [2024-05-15 01:30:36.049697] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.553 [2024-05-15 01:30:36.049719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.553 [2024-05-15 01:30:36.049730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:00.553 [2024-05-15 01:30:36.060924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.553 [2024-05-15 01:30:36.060946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.553 [2024-05-15 01:30:36.060957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.553 [2024-05-15 01:30:36.073046] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.553 [2024-05-15 01:30:36.073068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.553 [2024-05-15 01:30:36.073079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.553 [2024-05-15 01:30:36.084152] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.553 [2024-05-15 01:30:36.084175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.553 [2024-05-15 01:30:36.084188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.553 [2024-05-15 01:30:36.095388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.553 [2024-05-15 01:30:36.095410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.553 [2024-05-15 01:30:36.095421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:00.553 [2024-05-15 01:30:36.106584] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.553 [2024-05-15 01:30:36.106606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.553 [2024-05-15 01:30:36.106616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.553 [2024-05-15 01:30:36.117986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.553 [2024-05-15 01:30:36.118008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.553 [2024-05-15 01:30:36.118019] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.553 [2024-05-15 01:30:36.129947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.553 [2024-05-15 01:30:36.129968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.553 [2024-05-15 01:30:36.129978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.553 [2024-05-15 01:30:36.140972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.553 [2024-05-15 01:30:36.140992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.553 [2024-05-15 01:30:36.141003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:00.553 [2024-05-15 01:30:36.152813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.553 [2024-05-15 01:30:36.152835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.553 [2024-05-15 01:30:36.152846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.553 [2024-05-15 01:30:36.165469] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.553 [2024-05-15 01:30:36.165490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.553 [2024-05-15 01:30:36.165500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.553 [2024-05-15 01:30:36.176481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.553 [2024-05-15 01:30:36.176502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.553 [2024-05-15 01:30:36.176512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.553 [2024-05-15 01:30:36.187731] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.553 [2024-05-15 01:30:36.187752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.553 [2024-05-15 01:30:36.187762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:00.553 [2024-05-15 01:30:36.198459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.553 [2024-05-15 01:30:36.198480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:00.553 [2024-05-15 01:30:36.198491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.553 [2024-05-15 01:30:36.209354] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.553 [2024-05-15 01:30:36.209376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.553 [2024-05-15 01:30:36.209386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.553 [2024-05-15 01:30:36.221090] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.553 [2024-05-15 01:30:36.221112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.553 [2024-05-15 01:30:36.221122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.553 [2024-05-15 01:30:36.232594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.553 [2024-05-15 01:30:36.232616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.553 [2024-05-15 01:30:36.232626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:00.813 [2024-05-15 01:30:36.244280] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.813 [2024-05-15 01:30:36.244303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.813 [2024-05-15 01:30:36.244314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.813 [2024-05-15 01:30:36.255253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.813 [2024-05-15 01:30:36.255275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.813 [2024-05-15 01:30:36.255286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.813 [2024-05-15 01:30:36.266539] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.813 [2024-05-15 01:30:36.266560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.813 [2024-05-15 01:30:36.266571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.813 [2024-05-15 01:30:36.277997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.813 [2024-05-15 01:30:36.278019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.813 [2024-05-15 01:30:36.278032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:00.813 [2024-05-15 01:30:36.290023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.813 [2024-05-15 01:30:36.290045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.813 [2024-05-15 01:30:36.290056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.813 [2024-05-15 01:30:36.303042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.813 [2024-05-15 01:30:36.303063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.813 [2024-05-15 01:30:36.303073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.813 [2024-05-15 01:30:36.313654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.813 [2024-05-15 01:30:36.313675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.813 [2024-05-15 01:30:36.313685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.813 [2024-05-15 01:30:36.324543] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.813 [2024-05-15 01:30:36.324564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.813 [2024-05-15 01:30:36.324574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:00.813 [2024-05-15 01:30:36.335432] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.813 [2024-05-15 01:30:36.335453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.813 [2024-05-15 01:30:36.335464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.813 [2024-05-15 01:30:36.346908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.813 [2024-05-15 01:30:36.346931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.813 [2024-05-15 01:30:36.346941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.813 [2024-05-15 01:30:36.359934] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.813 [2024-05-15 01:30:36.359956] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.813 [2024-05-15 01:30:36.359967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.813 [2024-05-15 01:30:36.371299] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.813 [2024-05-15 01:30:36.371320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.813 [2024-05-15 01:30:36.371331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:00.813 [2024-05-15 01:30:36.383249] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.813 [2024-05-15 01:30:36.383274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.813 [2024-05-15 01:30:36.383285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.813 [2024-05-15 01:30:36.396258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.813 [2024-05-15 01:30:36.396279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.813 [2024-05-15 01:30:36.396289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.813 [2024-05-15 01:30:36.408660] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.813 [2024-05-15 01:30:36.408681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.813 [2024-05-15 01:30:36.408692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.813 [2024-05-15 01:30:36.419395] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.813 [2024-05-15 01:30:36.419416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.813 [2024-05-15 01:30:36.419426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:00.813 [2024-05-15 01:30:36.430251] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.813 [2024-05-15 01:30:36.430272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.813 [2024-05-15 01:30:36.430282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.813 [2024-05-15 01:30:36.441850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 
00:28:00.813 [2024-05-15 01:30:36.441872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.813 [2024-05-15 01:30:36.441883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.813 [2024-05-15 01:30:36.453237] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.813 [2024-05-15 01:30:36.453258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.813 [2024-05-15 01:30:36.453268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:00.813 [2024-05-15 01:30:36.463908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.814 [2024-05-15 01:30:36.463929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.814 [2024-05-15 01:30:36.463940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:00.814 [2024-05-15 01:30:36.475030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.814 [2024-05-15 01:30:36.475051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.814 [2024-05-15 01:30:36.475062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:00.814 [2024-05-15 01:30:36.486070] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.814 [2024-05-15 01:30:36.486091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.814 [2024-05-15 01:30:36.486101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:00.814 [2024-05-15 01:30:36.497377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:00.814 [2024-05-15 01:30:36.497398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:00.814 [2024-05-15 01:30:36.497408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.508668] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.508689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.508700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.520741] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.520762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.520773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.532892] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.532914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.532924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.544138] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.544160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.544171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.554969] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.554991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.555001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.565865] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.565886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.565897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.577763] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.577786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.577800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.590847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.590870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.590880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.604127] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.604150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.604160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.615635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.615657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.615668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.627086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.627109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.627119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.638298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.638320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.638331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.649527] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.649548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.649559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.660664] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.660686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.660696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.671126] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.671148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.671158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.682647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.682669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.682680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.694123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.694145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.694155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.705030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.705052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.705062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.716552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.716574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.716584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.728600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.728622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.728633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.739382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.739403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.739413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.750518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.750540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.750550] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.074 [2024-05-15 01:30:36.762151] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.074 [2024-05-15 01:30:36.762174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.074 [2024-05-15 01:30:36.762184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:01.333 [2024-05-15 01:30:36.773704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.333 [2024-05-15 01:30:36.773726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.333 [2024-05-15 01:30:36.773740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:01.333 [2024-05-15 01:30:36.784857] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.333 [2024-05-15 01:30:36.784879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.333 [2024-05-15 01:30:36.784889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.333 [2024-05-15 01:30:36.797011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.333 [2024-05-15 01:30:36.797032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.333 [2024-05-15 01:30:36.797043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.333 [2024-05-15 01:30:36.808817] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.333 [2024-05-15 01:30:36.808839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.333 [2024-05-15 01:30:36.808849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:01.333 [2024-05-15 01:30:36.819698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.333 [2024-05-15 01:30:36.819719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.333 [2024-05-15 01:30:36.819729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:01.333 [2024-05-15 01:30:36.830494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.333 [2024-05-15 01:30:36.830516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:01.333 [2024-05-15 01:30:36.830526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.333 [2024-05-15 01:30:36.842016] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.333 [2024-05-15 01:30:36.842038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.333 [2024-05-15 01:30:36.842048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.333 [2024-05-15 01:30:36.854775] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.333 [2024-05-15 01:30:36.854797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.333 [2024-05-15 01:30:36.854807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:01.333 [2024-05-15 01:30:36.867724] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.333 [2024-05-15 01:30:36.867746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.333 [2024-05-15 01:30:36.867756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:01.333 [2024-05-15 01:30:36.879377] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.333 [2024-05-15 01:30:36.879402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.333 [2024-05-15 01:30:36.879413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.333 [2024-05-15 01:30:36.890246] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.333 [2024-05-15 01:30:36.890267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.333 [2024-05-15 01:30:36.890278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.333 [2024-05-15 01:30:36.901471] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.333 [2024-05-15 01:30:36.901494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.334 [2024-05-15 01:30:36.901505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:01.334 [2024-05-15 01:30:36.912028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.334 [2024-05-15 01:30:36.912050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.334 [2024-05-15 01:30:36.912061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:01.334 [2024-05-15 01:30:36.923368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.334 [2024-05-15 01:30:36.923389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.334 [2024-05-15 01:30:36.923399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.334 [2024-05-15 01:30:36.936349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.334 [2024-05-15 01:30:36.936375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.334 [2024-05-15 01:30:36.936386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.334 [2024-05-15 01:30:36.948563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.334 [2024-05-15 01:30:36.948585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.334 [2024-05-15 01:30:36.948596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:01.334 [2024-05-15 01:30:36.959911] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.334 [2024-05-15 01:30:36.959933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.334 [2024-05-15 01:30:36.959944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:01.334 [2024-05-15 01:30:36.971888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.334 [2024-05-15 01:30:36.971910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.334 [2024-05-15 01:30:36.971921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.334 [2024-05-15 01:30:36.983839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.334 [2024-05-15 01:30:36.983862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.334 [2024-05-15 01:30:36.983872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.334 [2024-05-15 01:30:36.994812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.334 [2024-05-15 01:30:36.994836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.334 [2024-05-15 01:30:36.994848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:01.334 [2024-05-15 01:30:37.006756] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.334 [2024-05-15 01:30:37.006778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.334 [2024-05-15 01:30:37.006789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:01.334 [2024-05-15 01:30:37.017984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.334 [2024-05-15 01:30:37.018007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.334 [2024-05-15 01:30:37.018017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.593 [2024-05-15 01:30:37.028888] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.593 [2024-05-15 01:30:37.028911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.593 [2024-05-15 01:30:37.028922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.593 [2024-05-15 01:30:37.040602] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.593 [2024-05-15 01:30:37.040625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.593 [2024-05-15 01:30:37.040635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:01.593 [2024-05-15 01:30:37.051663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.593 [2024-05-15 01:30:37.051685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.593 [2024-05-15 01:30:37.051696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:01.593 [2024-05-15 01:30:37.062996] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.593 [2024-05-15 01:30:37.063018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.593 [2024-05-15 01:30:37.063028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.593 [2024-05-15 01:30:37.074068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 
00:28:01.593 [2024-05-15 01:30:37.074096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.593 [2024-05-15 01:30:37.074107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.593 [2024-05-15 01:30:37.085798] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.593 [2024-05-15 01:30:37.085821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.593 [2024-05-15 01:30:37.085832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:01.593 [2024-05-15 01:30:37.097458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.593 [2024-05-15 01:30:37.097480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.593 [2024-05-15 01:30:37.097491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:01.593 [2024-05-15 01:30:37.108766] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.593 [2024-05-15 01:30:37.108788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.593 [2024-05-15 01:30:37.108799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.593 [2024-05-15 01:30:37.122643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.593 [2024-05-15 01:30:37.122665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.593 [2024-05-15 01:30:37.122676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.593 [2024-05-15 01:30:37.131306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.593 [2024-05-15 01:30:37.131329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.593 [2024-05-15 01:30:37.131339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:01.593 [2024-05-15 01:30:37.142644] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.593 [2024-05-15 01:30:37.142667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.593 [2024-05-15 01:30:37.142678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:01.593 [2024-05-15 01:30:37.154623] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.593 [2024-05-15 01:30:37.154645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.593 [2024-05-15 01:30:37.154655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.593 [2024-05-15 01:30:37.174939] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.593 [2024-05-15 01:30:37.174961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.593 [2024-05-15 01:30:37.174971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.593 [2024-05-15 01:30:37.193457] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.593 [2024-05-15 01:30:37.193477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.593 [2024-05-15 01:30:37.193488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:01.593 [2024-05-15 01:30:37.214069] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.593 [2024-05-15 01:30:37.214089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.593 [2024-05-15 01:30:37.214099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:01.593 [2024-05-15 01:30:37.228417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.593 [2024-05-15 01:30:37.228437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.593 [2024-05-15 01:30:37.228447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:01.593 [2024-05-15 01:30:37.243863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x190aa00) 00:28:01.593 [2024-05-15 01:30:37.243883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:01.593 [2024-05-15 01:30:37.243893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:01.593 00:28:01.593 Latency(us) 00:28:01.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.593 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:01.593 nvme0n1 : 2.01 2540.78 317.60 0.00 0.00 6294.19 4639.95 25060.97 00:28:01.593 =================================================================================================================== 00:28:01.593 Total : 2540.78 317.60 0.00 0.00 6294.19 4639.95 25060.97 00:28:01.593 
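The summary above closes the randread leg of nvmf_digest_error: 128 KiB reads at queue depth 16 against nvme0n1 while CRC32C results were being corrupted, with throughput and latency reported by bdevperf. The trace that follows is the pass/fail check: the harness reads the controller's per-status NVMe error counters back over the bperf RPC socket (kept per status code when bdev_nvme is configured with --nvme-error-stat, as in the trace further below) and requires a non-zero command_transient_transport_error count. A condensed sketch of that query, using the same socket path and bdev name as the trace; SPDK_DIR is an assumed shorthand for the checked-out workspace tree:

#!/usr/bin/env bash
# Condensed sketch of the verification step traced below; not a new procedure.
set -euo pipefail
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
errcount=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# This run reported 164; the test only asserts that the value is greater than zero.
if (( errcount > 0 )); then
  echo "ok: $errcount transient transport error completions recorded"
else
  echo "fail: no transient transport errors recorded" >&2
  exit 1
fi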
0 00:28:01.593 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:01.593 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:01.593 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:01.593 | .driver_specific 00:28:01.593 | .nvme_error 00:28:01.593 | .status_code 00:28:01.593 | .command_transient_transport_error' 00:28:01.593 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:01.853 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 )) 00:28:01.853 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 67243 00:28:01.853 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 67243 ']' 00:28:01.853 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 67243 00:28:01.853 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:28:01.853 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:01.853 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 67243 00:28:01.853 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:01.853 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:01.853 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67243' 00:28:01.853 killing process with pid 67243 00:28:01.853 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 67243 00:28:01.853 Received shutdown signal, test time was about 2.000000 seconds 00:28:01.853 00:28:01.853 Latency(us) 00:28:01.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:01.853 =================================================================================================================== 00:28:01.853 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:01.853 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 67243 00:28:02.112 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:02.112 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:02.112 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:02.112 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:02.112 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:02.112 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=67879 00:28:02.112 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 67879 /var/tmp/bperf.sock 00:28:02.112 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:28:02.112 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@827 -- # '[' -z 67879 ']' 00:28:02.112 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:02.112 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:02.112 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:02.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:02.112 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:02.112 01:30:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:02.112 [2024-05-15 01:30:37.770663] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:28:02.112 [2024-05-15 01:30:37.770715] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67879 ] 00:28:02.112 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.370 [2024-05-15 01:30:37.839321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.370 [2024-05-15 01:30:37.903220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.937 01:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:02.937 01:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:28:02.937 01:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:02.937 01:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:03.197 01:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:03.197 01:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.197 01:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:03.197 01:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.197 01:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:03.197 01:30:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:03.456 nvme0n1 00:28:03.456 01:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:03.456 01:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.456 01:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:03.456 01:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:28:03.456 01:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:03.456 01:30:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:03.715 Running I/O for 2 seconds... 00:28:03.715 [2024-05-15 01:30:39.226009] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190fd208 00:28:03.715 [2024-05-15 01:30:39.227132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.715 [2024-05-15 01:30:39.227163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.715 [2024-05-15 01:30:39.235897] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.715 [2024-05-15 01:30:39.236100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.715 [2024-05-15 01:30:39.236124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.715 [2024-05-15 01:30:39.245101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.715 [2024-05-15 01:30:39.245283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.715 [2024-05-15 01:30:39.245303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.715 [2024-05-15 01:30:39.254255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.715 [2024-05-15 01:30:39.254446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.715 [2024-05-15 01:30:39.254466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.716 [2024-05-15 01:30:39.263385] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.716 [2024-05-15 01:30:39.263588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.716 [2024-05-15 01:30:39.263610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.716 [2024-05-15 01:30:39.272532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.716 [2024-05-15 01:30:39.272734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.716 [2024-05-15 01:30:39.272755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.716 [2024-05-15 01:30:39.281661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) 
with pdu=0x2000190feb58 00:28:03.716 [2024-05-15 01:30:39.281864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.716 [2024-05-15 01:30:39.281885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.716 [2024-05-15 01:30:39.290781] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.716 [2024-05-15 01:30:39.290978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.716 [2024-05-15 01:30:39.291004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.716 [2024-05-15 01:30:39.299829] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.716 [2024-05-15 01:30:39.300025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.716 [2024-05-15 01:30:39.300051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.716 [2024-05-15 01:30:39.308955] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.716 [2024-05-15 01:30:39.309154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.716 [2024-05-15 01:30:39.309179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.716 [2024-05-15 01:30:39.318082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.716 [2024-05-15 01:30:39.318283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.716 [2024-05-15 01:30:39.318302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.716 [2024-05-15 01:30:39.327358] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.716 [2024-05-15 01:30:39.327559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.716 [2024-05-15 01:30:39.327585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.716 [2024-05-15 01:30:39.336501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.716 [2024-05-15 01:30:39.336726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.716 [2024-05-15 01:30:39.336747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.716 [2024-05-15 01:30:39.345620] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.716 [2024-05-15 01:30:39.345810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.716 [2024-05-15 01:30:39.345830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.716 [2024-05-15 01:30:39.354719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.716 [2024-05-15 01:30:39.354912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.716 [2024-05-15 01:30:39.354934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.716 [2024-05-15 01:30:39.363825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.716 [2024-05-15 01:30:39.364023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.716 [2024-05-15 01:30:39.364050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.716 [2024-05-15 01:30:39.372938] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.716 [2024-05-15 01:30:39.373135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.716 [2024-05-15 01:30:39.373155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.716 [2024-05-15 01:30:39.382031] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.716 [2024-05-15 01:30:39.382249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.716 [2024-05-15 01:30:39.382270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.716 [2024-05-15 01:30:39.391203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.716 [2024-05-15 01:30:39.391404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.716 [2024-05-15 01:30:39.391424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.716 [2024-05-15 01:30:39.400287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.716 [2024-05-15 01:30:39.400501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.716 [2024-05-15 01:30:39.400521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.409706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.409906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.409926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.418815] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.419106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.419126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.427961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.428362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.428382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.437064] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.437272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.437301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.446150] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.446463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.446483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.455214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.455545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.455564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.464353] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.464517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.464536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.473449] tcp.c:2058:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.473749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.473769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.482539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.482714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.482733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.491748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.491911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.491929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.500835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.501018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.501037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.509882] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.510146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.510165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.519002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.519435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.519455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.528093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.528266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.528285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.537173] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.537489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.537509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.546292] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.546476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.546495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.555431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.555842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.555861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.564513] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.564830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.564849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.573602] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.573774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.573792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.582717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.583152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.583171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 01:30:39.591825] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.592268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.592287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.976 [2024-05-15 
01:30:39.600941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.976 [2024-05-15 01:30:39.601348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.976 [2024-05-15 01:30:39.601368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.977 [2024-05-15 01:30:39.610076] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.977 [2024-05-15 01:30:39.610293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.977 [2024-05-15 01:30:39.610312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.977 [2024-05-15 01:30:39.619214] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.977 [2024-05-15 01:30:39.619403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.977 [2024-05-15 01:30:39.619422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.977 [2024-05-15 01:30:39.628428] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.977 [2024-05-15 01:30:39.628612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.977 [2024-05-15 01:30:39.628631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.977 [2024-05-15 01:30:39.637490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.977 [2024-05-15 01:30:39.637673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.977 [2024-05-15 01:30:39.637691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.977 [2024-05-15 01:30:39.646612] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.977 [2024-05-15 01:30:39.647014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.977 [2024-05-15 01:30:39.647034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:03.977 [2024-05-15 01:30:39.655614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.977 [2024-05-15 01:30:39.655871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.977 [2024-05-15 01:30:39.655891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
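The WRITE-phase records above come from the second leg of run_bperf_err, traced earlier in this output: a fresh bdevperf is started on /var/tmp/bperf.sock, CRC32C error injection is re-armed in the accel layer, the controller is attached with the data digest enabled (--ddgst), and a 2 second randwrite job (4 KiB I/O at queue depth 128) is kicked off through bdevperf.py perform_tests. Every corrupted digest then shows up as a data_crc32_calc_done error plus a (00/22) completion, exactly as in the blocks above and below. The setup commands, collected from that trace into one place for readability (same paths, addresses and arguments as the harness; SPDK_DIR is an assumed shorthand, and the sleep is a crude stand-in for the harness's waitforlisten):

#!/usr/bin/env bash
# Recap sketch of the randwrite error-injection setup already traced above.
set -euo pipefail
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"

# bdevperf is the initiator-side I/O generator: 2 s of 4 KiB randwrite at qd 128, waiting for RPC (-z).
"$SPDK_DIR"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
sleep 2   # crude stand-in for the harness's waitforlisten on /var/tmp/bperf.sock

$RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # options copied from the trace
$RPC accel_error_inject_error -o crc32c -t disable                   # clear any previous crc32c injection
$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
     -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # data digest (CRC32C) enabled on the TCP connection
$RPC accel_error_inject_error -o crc32c -t corrupt -i 256            # re-arm crc32c corruption (-i 256 as in the trace)
"$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests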
00:28:03.977 [2024-05-15 01:30:39.664833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:03.977 [2024-05-15 01:30:39.664999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:03.977 [2024-05-15 01:30:39.665018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.674105] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:04.237 [2024-05-15 01:30:39.674336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.674358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.683236] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:04.237 [2024-05-15 01:30:39.683403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.683422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.692430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190feb58 00:28:04.237 [2024-05-15 01:30:39.694513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.694533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.704250] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190ff3c8 00:28:04.237 [2024-05-15 01:30:39.705329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.705349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.713595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.713825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.713845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.722736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.722951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.722971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.731840] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.732069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.732089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.740986] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.741222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.741241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.750297] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.750555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.750575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.759494] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.759727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.759749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.768617] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.768853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.768872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.777779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.778017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.778037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.787173] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.787433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.787453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.796396] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.796634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.796654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.805517] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.805750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.805769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.814644] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.814872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.814891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.823817] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.824045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.824064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.832964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.833202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.833222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.842269] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.842508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.842528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.851392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.851626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.851646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.860502] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.860736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.860755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.869666] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.869899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.869918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.878871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.879112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.879132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.888068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.888297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.237 [2024-05-15 01:30:39.888317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.237 [2024-05-15 01:30:39.897202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.237 [2024-05-15 01:30:39.897431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.238 [2024-05-15 01:30:39.897450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.238 [2024-05-15 01:30:39.906304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.238 [2024-05-15 01:30:39.906537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.238 [2024-05-15 01:30:39.906557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.238 [2024-05-15 01:30:39.915430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.238 [2024-05-15 01:30:39.915668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.238 [2024-05-15 01:30:39.915690] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.238 [2024-05-15 01:30:39.924636] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.238 [2024-05-15 01:30:39.924901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.238 [2024-05-15 01:30:39.924922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.497 [2024-05-15 01:30:39.934029] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.497 [2024-05-15 01:30:39.934263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.497 [2024-05-15 01:30:39.934282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.497 [2024-05-15 01:30:39.943202] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.497 [2024-05-15 01:30:39.943438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.497 [2024-05-15 01:30:39.943457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.497 [2024-05-15 01:30:39.952309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.497 [2024-05-15 01:30:39.952543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.497 [2024-05-15 01:30:39.952562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.497 [2024-05-15 01:30:39.961427] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.497 [2024-05-15 01:30:39.961659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.497 [2024-05-15 01:30:39.961678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.497 [2024-05-15 01:30:39.970539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.497 [2024-05-15 01:30:39.970777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:18786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.497 [2024-05-15 01:30:39.970796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.497 [2024-05-15 01:30:39.979644] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 00:28:04.497 [2024-05-15 01:30:39.979875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:04.497 [2024-05-15 01:30:39.979895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.497 - 00:28:05.549 [2024-05-15 01:30:39.988768 - 01:30:41.204535] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed56e0) with pdu=0x2000190f8e88 (repeated for each in-flight WRITE on qid:1, cid alternating 102/112; every such WRITE, nsid:1 len:0x1000, is printed by nvme_qpair.c: 243:nvme_io_qpair_print_command and its completion by nvme_qpair.c: 474:spdk_nvme_print_completion as COMMAND TRANSIENT TRANSPORT ERROR (00/22) cdw0:0 sqhd:0000 p:0 m:0 dnr:0)
*NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:05.549 [2024-05-15 01:30:41.204535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:05.549 00:28:05.549 Latency(us) 00:28:05.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.549 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:05.549 nvme0n1 : 2.00 27448.98 107.22 0.00 0.00 4655.08 2542.80 17511.22 00:28:05.549 =================================================================================================================== 00:28:05.549 Total : 27448.98 107.22 0.00 0.00 4655.08 2542.80 17511.22 00:28:05.549 0 00:28:05.549 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:05.549 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:05.549 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:05.549 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:05.549 | .driver_specific 00:28:05.549 | .nvme_error 00:28:05.549 | .status_code 00:28:05.549 | .command_transient_transport_error' 00:28:05.810 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 215 > 0 )) 00:28:05.810 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 67879 00:28:05.810 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 67879 ']' 00:28:05.810 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 67879 00:28:05.810 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:28:05.810 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:05.810 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 67879 00:28:05.810 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:05.810 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:05.810 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 67879' 00:28:05.810 killing process with pid 67879 00:28:05.810 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 67879 00:28:05.810 Received shutdown signal, test time was about 2.000000 seconds 00:28:05.810 00:28:05.810 Latency(us) 00:28:05.810 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.810 =================================================================================================================== 00:28:05.810 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:05.810 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 67879 00:28:06.069 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:06.069 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:06.069 01:30:41 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:06.069 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:06.069 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:06.069 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=68566 00:28:06.069 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 68566 /var/tmp/bperf.sock 00:28:06.069 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:06.069 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 68566 ']' 00:28:06.069 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:06.069 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:06.069 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:06.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:06.070 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:06.070 01:30:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:06.070 [2024-05-15 01:30:41.709754] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:28:06.070 [2024-05-15 01:30:41.709806] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68566 ] 00:28:06.070 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:06.070 Zero copy mechanism will not be used. 
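The get_transient_errcount step traced above is how each digest-error run is judged: the per-bdev NVMe error counters (enabled earlier via bdev_nvme_set_options --nvme-error-stat) are read over the bdevperf RPC socket and the transient-transport-error count is extracted with jq. Below is a minimal standalone sketch of that check, reusing the rpc.py path, socket, bdev name, and jq filter visible in the trace; the surrounding shell structure is illustrative rather than the exact digest.sh helpers.

#!/usr/bin/env bash
# Sketch of the traced get_transient_errcount check: query per-bdev I/O stats
# from the bdevperf instance over its private RPC socket and extract the
# "command transient transport error" counter. Paths, socket, bdev name and
# jq filter are copied from the log above; the wrapper functions are not.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

errcount=$("$rpc_py" -s "$bperf_sock" bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error')

# The run that just finished reported 215 such errors; the test only requires
# a non-zero count, i.e. at least one corrupted data digest was detected and
# surfaced to the host as a transient transport error.
(( errcount > 0 )) || exit 1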
00:28:06.070 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.351 [2024-05-15 01:30:41.780185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.351 [2024-05-15 01:30:41.854355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.919 01:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:06.920 01:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:28:06.920 01:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:06.920 01:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:07.179 01:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:07.179 01:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.179 01:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:07.179 01:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.179 01:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:07.179 01:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:07.438 nvme0n1 00:28:07.438 01:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:07.438 01:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:07.438 01:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:07.438 01:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:07.438 01:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:07.438 01:30:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:07.438 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:07.438 Zero copy mechanism will not be used. 00:28:07.438 Running I/O for 2 seconds... 
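The run whose output follows was set up as traced above: NVMe error statistics and unlimited bdev retries are enabled on the bdevperf side, the controller is attached over TCP with data digest (--ddgst), crc32c corruption is armed in the accel layer for 32 operations, and perform_tests is issued over the bdevperf socket. Below is a condensed sketch of that sequence; command names and arguments are taken from the trace, while the target-side RPC socket path is an assumption (the log only shows the rpc_cmd wrapper).

#!/usr/bin/env bash
# Condensed sketch of the digest-error setup traced above.
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_sock=/var/tmp/bperf.sock
target_sock=/var/tmp/spdk.sock   # assumed default target RPC socket, not shown in the trace

# Track NVMe errors per status code and retry failed I/O indefinitely on the host side.
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# Attach the subsystem with data digest enabled so data PDUs carry a CRC32C digest.
"$spdk/scripts/rpc.py" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt the next 32 crc32c operations in the target's accel layer; digest
# verification then fails and the affected writes complete with
# COMMAND TRANSIENT TRANSPORT ERROR, as the output below shows.
"$spdk/scripts/rpc.py" -s "$target_sock" accel_error_inject_error -o crc32c -t corrupt -i 32

# Start the workload configured on the bdevperf command line
# (randwrite, 128 KiB I/O, queue depth 16, 2 seconds).
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests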
00:28:07.438 [2024-05-15 01:30:43.080491] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.438 [2024-05-15 01:30:43.080881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.438 [2024-05-15 01:30:43.080909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.438 [2024-05-15 01:30:43.094060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.438 [2024-05-15 01:30:43.094510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.438 [2024-05-15 01:30:43.094534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.438 [2024-05-15 01:30:43.106694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.438 [2024-05-15 01:30:43.107153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.438 [2024-05-15 01:30:43.107176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.438 [2024-05-15 01:30:43.120276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.438 [2024-05-15 01:30:43.120757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.438 [2024-05-15 01:30:43.120779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.699 [2024-05-15 01:30:43.134244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.699 [2024-05-15 01:30:43.134652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-05-15 01:30:43.134674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.699 [2024-05-15 01:30:43.149030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.699 [2024-05-15 01:30:43.149279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-05-15 01:30:43.149302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.699 [2024-05-15 01:30:43.162689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.699 [2024-05-15 01:30:43.163121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-05-15 01:30:43.163144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.699 [2024-05-15 01:30:43.176381] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.699 [2024-05-15 01:30:43.176818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-05-15 01:30:43.176840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.699 [2024-05-15 01:30:43.190695] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.699 [2024-05-15 01:30:43.190886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-05-15 01:30:43.190906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.699 [2024-05-15 01:30:43.205077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.699 [2024-05-15 01:30:43.205655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-05-15 01:30:43.205676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.699 [2024-05-15 01:30:43.217675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.699 [2024-05-15 01:30:43.218122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-05-15 01:30:43.218143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.699 [2024-05-15 01:30:43.231564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.699 [2024-05-15 01:30:43.232064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-05-15 01:30:43.232083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.699 [2024-05-15 01:30:43.244304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.699 [2024-05-15 01:30:43.245151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-05-15 01:30:43.245171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.699 [2024-05-15 01:30:43.257871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.699 [2024-05-15 01:30:43.258497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-05-15 01:30:43.258518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.699 [2024-05-15 01:30:43.271474] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.699 [2024-05-15 01:30:43.272118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-05-15 01:30:43.272139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.699 [2024-05-15 01:30:43.285779] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.699 [2024-05-15 01:30:43.286596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-05-15 01:30:43.286616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.699 [2024-05-15 01:30:43.300168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.699 [2024-05-15 01:30:43.300784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-05-15 01:30:43.300805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.699 [2024-05-15 01:30:43.314923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.699 [2024-05-15 01:30:43.315504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-05-15 01:30:43.315529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.699 [2024-05-15 01:30:43.328016] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.699 [2024-05-15 01:30:43.328587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-05-15 01:30:43.328606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.699 [2024-05-15 01:30:43.342489] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.699 [2024-05-15 01:30:43.343059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-05-15 01:30:43.343080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.699 [2024-05-15 01:30:43.356575] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.699 [2024-05-15 01:30:43.357078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-05-15 01:30:43.357098] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.699 [2024-05-15 01:30:43.369128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.699 [2024-05-15 01:30:43.369744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-05-15 01:30:43.369765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.699 [2024-05-15 01:30:43.382391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.699 [2024-05-15 01:30:43.382875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.699 [2024-05-15 01:30:43.382895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.957 [2024-05-15 01:30:43.395909] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.957 [2024-05-15 01:30:43.396396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.957 [2024-05-15 01:30:43.396416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.957 [2024-05-15 01:30:43.409354] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.957 [2024-05-15 01:30:43.409926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.957 [2024-05-15 01:30:43.409947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.957 [2024-05-15 01:30:43.422644] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.958 [2024-05-15 01:30:43.423226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.958 [2024-05-15 01:30:43.423246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.958 [2024-05-15 01:30:43.437000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.958 [2024-05-15 01:30:43.437562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.958 [2024-05-15 01:30:43.437583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.958 [2024-05-15 01:30:43.450559] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.958 [2024-05-15 01:30:43.451071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.958 [2024-05-15 
01:30:43.451092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.958 [2024-05-15 01:30:43.464675] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.958 [2024-05-15 01:30:43.465272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.958 [2024-05-15 01:30:43.465293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.958 [2024-05-15 01:30:43.478228] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.958 [2024-05-15 01:30:43.478777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.958 [2024-05-15 01:30:43.478798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.958 [2024-05-15 01:30:43.492328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.958 [2024-05-15 01:30:43.492962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.958 [2024-05-15 01:30:43.492982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.958 [2024-05-15 01:30:43.506503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.958 [2024-05-15 01:30:43.506904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.958 [2024-05-15 01:30:43.506924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.958 [2024-05-15 01:30:43.521060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.958 [2024-05-15 01:30:43.521498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.958 [2024-05-15 01:30:43.521517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.958 [2024-05-15 01:30:43.534895] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.958 [2024-05-15 01:30:43.535525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.958 [2024-05-15 01:30:43.535546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.958 [2024-05-15 01:30:43.549571] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.958 [2024-05-15 01:30:43.550160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:07.958 [2024-05-15 01:30:43.550184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.958 [2024-05-15 01:30:43.563653] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.958 [2024-05-15 01:30:43.564175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.958 [2024-05-15 01:30:43.564200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.958 [2024-05-15 01:30:43.577152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.958 [2024-05-15 01:30:43.577676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.958 [2024-05-15 01:30:43.577696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.958 [2024-05-15 01:30:43.590873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.958 [2024-05-15 01:30:43.591460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.958 [2024-05-15 01:30:43.591480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:07.958 [2024-05-15 01:30:43.603868] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.958 [2024-05-15 01:30:43.604365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.958 [2024-05-15 01:30:43.604385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:07.958 [2024-05-15 01:30:43.617833] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.958 [2024-05-15 01:30:43.618295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.958 [2024-05-15 01:30:43.618315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:07.958 [2024-05-15 01:30:43.632742] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.958 [2024-05-15 01:30:43.633471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.958 [2024-05-15 01:30:43.633491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:07.958 [2024-05-15 01:30:43.647320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:07.958 [2024-05-15 01:30:43.647850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:07.958 [2024-05-15 01:30:43.647871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.217 [2024-05-15 01:30:43.660303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.217 [2024-05-15 01:30:43.660913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-05-15 01:30:43.660933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.217 [2024-05-15 01:30:43.674827] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.217 [2024-05-15 01:30:43.675404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-05-15 01:30:43.675427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.217 [2024-05-15 01:30:43.689442] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.217 [2024-05-15 01:30:43.689946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-05-15 01:30:43.689967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.217 [2024-05-15 01:30:43.704134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.217 [2024-05-15 01:30:43.704593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-05-15 01:30:43.704613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.217 [2024-05-15 01:30:43.719247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.217 [2024-05-15 01:30:43.719641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-05-15 01:30:43.719662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.217 [2024-05-15 01:30:43.733050] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.217 [2024-05-15 01:30:43.733513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-05-15 01:30:43.733534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.217 [2024-05-15 01:30:43.748074] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.217 [2024-05-15 01:30:43.748700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-05-15 01:30:43.748720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.217 [2024-05-15 01:30:43.763246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.217 [2024-05-15 01:30:43.763743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-05-15 01:30:43.763762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.217 [2024-05-15 01:30:43.778363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.217 [2024-05-15 01:30:43.778877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-05-15 01:30:43.778897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.217 [2024-05-15 01:30:43.792585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.217 [2024-05-15 01:30:43.793095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-05-15 01:30:43.793116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.217 [2024-05-15 01:30:43.807158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.217 [2024-05-15 01:30:43.807770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-05-15 01:30:43.807791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.217 [2024-05-15 01:30:43.822010] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.217 [2024-05-15 01:30:43.822653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-05-15 01:30:43.822673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.217 [2024-05-15 01:30:43.837336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.217 [2024-05-15 01:30:43.837961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-05-15 01:30:43.837981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.217 [2024-05-15 01:30:43.851687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.217 [2024-05-15 01:30:43.852228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-05-15 01:30:43.852247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.217 [2024-05-15 01:30:43.865981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.217 [2024-05-15 01:30:43.866575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-05-15 01:30:43.866595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.217 [2024-05-15 01:30:43.880614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.217 [2024-05-15 01:30:43.881195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.217 [2024-05-15 01:30:43.881215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.217 [2024-05-15 01:30:43.895717] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.217 [2024-05-15 01:30:43.896158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.218 [2024-05-15 01:30:43.896178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.477 [2024-05-15 01:30:43.910715] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.477 [2024-05-15 01:30:43.911428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.477 [2024-05-15 01:30:43.911448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.477 [2024-05-15 01:30:43.925747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.477 [2024-05-15 01:30:43.926267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.477 [2024-05-15 01:30:43.926291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.477 [2024-05-15 01:30:43.940725] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.477 [2024-05-15 01:30:43.941172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.477 [2024-05-15 01:30:43.941195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.477 [2024-05-15 01:30:43.955422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.477 
[2024-05-15 01:30:43.955879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.477 [2024-05-15 01:30:43.955898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.477 [2024-05-15 01:30:43.968233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.477 [2024-05-15 01:30:43.968841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.478 [2024-05-15 01:30:43.968863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.478 [2024-05-15 01:30:43.983797] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.478 [2024-05-15 01:30:43.984282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.478 [2024-05-15 01:30:43.984303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.478 [2024-05-15 01:30:43.999457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.478 [2024-05-15 01:30:44.000016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.478 [2024-05-15 01:30:44.000037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.478 [2024-05-15 01:30:44.013557] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.478 [2024-05-15 01:30:44.014040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.478 [2024-05-15 01:30:44.014060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.478 [2024-05-15 01:30:44.027266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.478 [2024-05-15 01:30:44.027845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.478 [2024-05-15 01:30:44.027864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.478 [2024-05-15 01:30:44.041148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.478 [2024-05-15 01:30:44.041725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.478 [2024-05-15 01:30:44.041746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.478 [2024-05-15 01:30:44.055168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) 
with pdu=0x2000190fef90 00:28:08.478 [2024-05-15 01:30:44.055737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.478 [2024-05-15 01:30:44.055757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.478 [2024-05-15 01:30:44.070260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.478 [2024-05-15 01:30:44.070714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.478 [2024-05-15 01:30:44.070734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.478 [2024-05-15 01:30:44.084160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.478 [2024-05-15 01:30:44.084662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.478 [2024-05-15 01:30:44.084681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.478 [2024-05-15 01:30:44.098303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.478 [2024-05-15 01:30:44.098875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.478 [2024-05-15 01:30:44.098895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.478 [2024-05-15 01:30:44.114601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.478 [2024-05-15 01:30:44.115220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.478 [2024-05-15 01:30:44.115240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.478 [2024-05-15 01:30:44.129503] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.478 [2024-05-15 01:30:44.130063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.478 [2024-05-15 01:30:44.130084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.478 [2024-05-15 01:30:44.144870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.478 [2024-05-15 01:30:44.145573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.478 [2024-05-15 01:30:44.145593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.478 [2024-05-15 01:30:44.160536] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.478 [2024-05-15 01:30:44.161025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.478 [2024-05-15 01:30:44.161045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.739 [2024-05-15 01:30:44.175110] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.739 [2024-05-15 01:30:44.175695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.739 [2024-05-15 01:30:44.175715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.739 [2024-05-15 01:30:44.189315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.739 [2024-05-15 01:30:44.189842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.739 [2024-05-15 01:30:44.189862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.739 [2024-05-15 01:30:44.203924] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.739 [2024-05-15 01:30:44.204503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.739 [2024-05-15 01:30:44.204523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.739 [2024-05-15 01:30:44.218168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.739 [2024-05-15 01:30:44.218686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.739 [2024-05-15 01:30:44.218706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.739 [2024-05-15 01:30:44.233552] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.739 [2024-05-15 01:30:44.234047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.739 [2024-05-15 01:30:44.234067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.739 [2024-05-15 01:30:44.248154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.739 [2024-05-15 01:30:44.248607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.739 [2024-05-15 01:30:44.248627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.739 [2024-05-15 01:30:44.263239] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.739 [2024-05-15 01:30:44.263660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.739 [2024-05-15 01:30:44.263680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.739 [2024-05-15 01:30:44.278890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.739 [2024-05-15 01:30:44.279397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.739 [2024-05-15 01:30:44.279420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.739 [2024-05-15 01:30:44.293873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.739 [2024-05-15 01:30:44.294276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.739 [2024-05-15 01:30:44.294296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.739 [2024-05-15 01:30:44.309372] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.739 [2024-05-15 01:30:44.309878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.739 [2024-05-15 01:30:44.309905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.739 [2024-05-15 01:30:44.323948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.739 [2024-05-15 01:30:44.324422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.739 [2024-05-15 01:30:44.324442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.739 [2024-05-15 01:30:44.338246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.739 [2024-05-15 01:30:44.338653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.739 [2024-05-15 01:30:44.338673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.739 [2024-05-15 01:30:44.353334] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.739 [2024-05-15 01:30:44.353906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.739 [2024-05-15 01:30:44.353926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:28:08.739 [2024-05-15 01:30:44.366943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.739 [2024-05-15 01:30:44.367441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.739 [2024-05-15 01:30:44.367462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.739 [2024-05-15 01:30:44.380895] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.739 [2024-05-15 01:30:44.381434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.739 [2024-05-15 01:30:44.381454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.739 [2024-05-15 01:30:44.394854] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.739 [2024-05-15 01:30:44.395403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.739 [2024-05-15 01:30:44.395423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.739 [2024-05-15 01:30:44.408959] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.739 [2024-05-15 01:30:44.409537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.739 [2024-05-15 01:30:44.409557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.739 [2024-05-15 01:30:44.423830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.739 [2024-05-15 01:30:44.424328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.739 [2024-05-15 01:30:44.424350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.999 [2024-05-15 01:30:44.438443] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.999 [2024-05-15 01:30:44.438905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.999 [2024-05-15 01:30:44.438925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.999 [2024-05-15 01:30:44.452921] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.999 [2024-05-15 01:30:44.453674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.999 [2024-05-15 01:30:44.453693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.999 [2024-05-15 01:30:44.467225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.999 [2024-05-15 01:30:44.467841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.999 [2024-05-15 01:30:44.467861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.999 [2024-05-15 01:30:44.480441] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.999 [2024-05-15 01:30:44.480978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.999 [2024-05-15 01:30:44.480998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.999 [2024-05-15 01:30:44.494415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.999 [2024-05-15 01:30:44.494956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.999 [2024-05-15 01:30:44.494976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.999 [2024-05-15 01:30:44.509402] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.999 [2024-05-15 01:30:44.509921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.999 [2024-05-15 01:30:44.509940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.999 [2024-05-15 01:30:44.521612] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.999 [2024-05-15 01:30:44.522253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.999 [2024-05-15 01:30:44.522272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.999 [2024-05-15 01:30:44.535996] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.999 [2024-05-15 01:30:44.536651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.999 [2024-05-15 01:30:44.536671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.999 [2024-05-15 01:30:44.551258] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.999 [2024-05-15 01:30:44.551799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.999 [2024-05-15 01:30:44.551819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.999 [2024-05-15 01:30:44.566063] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.999 [2024-05-15 01:30:44.566435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.999 [2024-05-15 01:30:44.566455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.999 [2024-05-15 01:30:44.580522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.999 [2024-05-15 01:30:44.581119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.999 [2024-05-15 01:30:44.581139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.999 [2024-05-15 01:30:44.595667] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.999 [2024-05-15 01:30:44.596214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.999 [2024-05-15 01:30:44.596235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:08.999 [2024-05-15 01:30:44.608368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.999 [2024-05-15 01:30:44.608919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.999 [2024-05-15 01:30:44.608940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:08.999 [2024-05-15 01:30:44.622478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.999 [2024-05-15 01:30:44.622870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.999 [2024-05-15 01:30:44.622890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:08.999 [2024-05-15 01:30:44.637055] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.999 [2024-05-15 01:30:44.637638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.999 [2024-05-15 01:30:44.637659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:08.999 [2024-05-15 01:30:44.651641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:08.999 [2024-05-15 01:30:44.652154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:08.999 [2024-05-15 01:30:44.652174] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:09.000 [2024-05-15 01:30:44.666018] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.000 [2024-05-15 01:30:44.666522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.000 [2024-05-15 01:30:44.666542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.000 [2024-05-15 01:30:44.678685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.000 [2024-05-15 01:30:44.679028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.000 [2024-05-15 01:30:44.679053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:09.259 [2024-05-15 01:30:44.692881] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.259 [2024-05-15 01:30:44.693324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.259 [2024-05-15 01:30:44.693346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:09.259 [2024-05-15 01:30:44.707712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.259 [2024-05-15 01:30:44.708261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.259 [2024-05-15 01:30:44.708282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:09.259 [2024-05-15 01:30:44.721373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.259 [2024-05-15 01:30:44.721936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.259 [2024-05-15 01:30:44.721958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.259 [2024-05-15 01:30:44.735898] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.259 [2024-05-15 01:30:44.736365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.259 [2024-05-15 01:30:44.736386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:09.259 [2024-05-15 01:30:44.750539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.259 [2024-05-15 01:30:44.751135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.259 
[2024-05-15 01:30:44.751155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:09.259 [2024-05-15 01:30:44.765756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.259 [2024-05-15 01:30:44.766272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.259 [2024-05-15 01:30:44.766292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:09.259 [2024-05-15 01:30:44.780814] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.259 [2024-05-15 01:30:44.781481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.259 [2024-05-15 01:30:44.781500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.259 [2024-05-15 01:30:44.796631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.259 [2024-05-15 01:30:44.797211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.259 [2024-05-15 01:30:44.797239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:09.259 [2024-05-15 01:30:44.811462] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.259 [2024-05-15 01:30:44.811942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.259 [2024-05-15 01:30:44.811962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:09.259 [2024-05-15 01:30:44.825150] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.259 [2024-05-15 01:30:44.825727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.259 [2024-05-15 01:30:44.825748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:09.259 [2024-05-15 01:30:44.840266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.259 [2024-05-15 01:30:44.840854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.259 [2024-05-15 01:30:44.840874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.259 [2024-05-15 01:30:44.854174] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.259 [2024-05-15 01:30:44.854763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:09.259 [2024-05-15 01:30:44.854783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:09.259 [2024-05-15 01:30:44.867800] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.259 [2024-05-15 01:30:44.868377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.259 [2024-05-15 01:30:44.868398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:09.259 [2024-05-15 01:30:44.883128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.259 [2024-05-15 01:30:44.883760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.259 [2024-05-15 01:30:44.883780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:09.259 [2024-05-15 01:30:44.897585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.259 [2024-05-15 01:30:44.898151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.259 [2024-05-15 01:30:44.898171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.259 [2024-05-15 01:30:44.911727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.259 [2024-05-15 01:30:44.912315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.259 [2024-05-15 01:30:44.912335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:09.259 [2024-05-15 01:30:44.927004] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.259 [2024-05-15 01:30:44.927521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.259 [2024-05-15 01:30:44.927541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:09.259 [2024-05-15 01:30:44.942272] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.259 [2024-05-15 01:30:44.942899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.259 [2024-05-15 01:30:44.942919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:09.519 [2024-05-15 01:30:44.957320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.519 [2024-05-15 01:30:44.957774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.519 [2024-05-15 01:30:44.957794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.519 [2024-05-15 01:30:44.971646] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.519 [2024-05-15 01:30:44.972255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.519 [2024-05-15 01:30:44.972275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:09.519 [2024-05-15 01:30:44.985576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.519 [2024-05-15 01:30:44.986063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.519 [2024-05-15 01:30:44.986086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:09.519 [2024-05-15 01:30:44.998862] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.519 [2024-05-15 01:30:44.999399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.519 [2024-05-15 01:30:44.999420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:09.519 [2024-05-15 01:30:45.011101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.519 [2024-05-15 01:30:45.011710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.519 [2024-05-15 01:30:45.011731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:09.519 [2024-05-15 01:30:45.025224] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.519 [2024-05-15 01:30:45.025584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.519 [2024-05-15 01:30:45.025605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:09.519 [2024-05-15 01:30:45.037232] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.519 [2024-05-15 01:30:45.037635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.519 [2024-05-15 01:30:45.037656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:09.519 [2024-05-15 01:30:45.051160] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed5b50) with pdu=0x2000190fef90 00:28:09.519 [2024-05-15 01:30:45.051473] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:09.519 [2024-05-15 01:30:45.051496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:09.519 00:28:09.519 Latency(us) 00:28:09.519 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.519 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:09.519 nvme0n1 : 2.01 2148.40 268.55 0.00 0.00 7431.76 5033.16 31667.00 00:28:09.519 =================================================================================================================== 00:28:09.519 Total : 2148.40 268.55 0.00 0.00 7431.76 5033.16 31667.00 00:28:09.519 0 00:28:09.519 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:09.519 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:09.519 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:09.519 | .driver_specific 00:28:09.519 | .nvme_error 00:28:09.519 | .status_code 00:28:09.519 | .command_transient_transport_error' 00:28:09.519 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:09.779 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 139 > 0 )) 00:28:09.779 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 68566 00:28:09.779 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 68566 ']' 00:28:09.779 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 68566 00:28:09.779 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:28:09.779 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:09.779 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 68566 00:28:09.779 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:09.779 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:09.779 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 68566' 00:28:09.779 killing process with pid 68566 00:28:09.779 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 68566 00:28:09.779 Received shutdown signal, test time was about 2.000000 seconds 00:28:09.779 00:28:09.779 Latency(us) 00:28:09.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.779 =================================================================================================================== 00:28:09.779 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:09.779 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 68566 00:28:10.039 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 66318 00:28:10.039 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 66318 ']' 00:28:10.039 01:30:45 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 66318 00:28:10.039 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:28:10.039 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:10.039 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66318 00:28:10.039 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:10.039 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:10.039 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66318' 00:28:10.039 killing process with pid 66318 00:28:10.039 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 66318 00:28:10.039 [2024-05-15 01:30:45.545868] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:10.039 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 66318 00:28:10.298 00:28:10.298 real 0m16.986s 00:28:10.298 user 0m32.172s 00:28:10.298 sys 0m4.741s 00:28:10.298 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:10.298 01:30:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:10.298 ************************************ 00:28:10.298 END TEST nvmf_digest_error 00:28:10.298 ************************************ 00:28:10.298 01:30:45 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:10.298 01:30:45 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:10.298 01:30:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:10.298 01:30:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:28:10.298 01:30:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:10.298 01:30:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:28:10.298 01:30:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:10.298 01:30:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:10.298 rmmod nvme_tcp 00:28:10.298 rmmod nvme_fabrics 00:28:10.298 rmmod nvme_keyring 00:28:10.298 01:30:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:10.298 01:30:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:28:10.298 01:30:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:28:10.298 01:30:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 66318 ']' 00:28:10.298 01:30:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 66318 00:28:10.298 01:30:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 66318 ']' 00:28:10.298 01:30:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 66318 00:28:10.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (66318) - No such process 00:28:10.298 01:30:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 66318 is not found' 00:28:10.299 Process with pid 66318 is not found 00:28:10.299 01:30:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 
-- # '[' '' == iso ']' 00:28:10.299 01:30:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:10.299 01:30:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:10.299 01:30:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:10.299 01:30:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:10.299 01:30:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.299 01:30:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:10.299 01:30:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.836 01:30:47 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:12.836 00:28:12.836 real 0m43.293s 00:28:12.836 user 1m6.026s 00:28:12.836 sys 0m14.894s 00:28:12.836 01:30:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:12.836 01:30:47 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:12.836 ************************************ 00:28:12.836 END TEST nvmf_digest 00:28:12.836 ************************************ 00:28:12.836 01:30:47 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:28:12.836 01:30:47 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:28:12.836 01:30:47 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:28:12.836 01:30:47 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:12.836 01:30:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:12.836 01:30:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:12.836 01:30:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:12.836 ************************************ 00:28:12.836 START TEST nvmf_bdevperf 00:28:12.836 ************************************ 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:12.836 * Looking for test storage... 
00:28:12.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.836 01:30:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:12.837 01:30:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:19.409 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:19.409 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:19.409 Found net devices under 0000:af:00.0: cvl_0_0 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:19.409 Found net devices under 0000:af:00.1: cvl_0_1 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:19.409 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:19.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:19.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:28:19.410 00:28:19.410 --- 10.0.0.2 ping statistics --- 00:28:19.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.410 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:19.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:19.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:28:19.410 00:28:19.410 --- 10.0.0.1 ping statistics --- 00:28:19.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.410 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=72907 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 72907 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 72907 ']' 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:19.410 01:30:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.410 [2024-05-15 01:30:54.763575] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:28:19.410 [2024-05-15 01:30:54.763620] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.410 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.410 [2024-05-15 01:30:54.837577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:19.410 [2024-05-15 01:30:54.911121] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:19.410 [2024-05-15 01:30:54.911156] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.410 [2024-05-15 01:30:54.911166] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.410 [2024-05-15 01:30:54.911175] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.410 [2024-05-15 01:30:54.911182] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:19.410 [2024-05-15 01:30:54.911293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.410 [2024-05-15 01:30:54.911386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.410 [2024-05-15 01:30:54.911388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.978 01:30:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:19.978 01:30:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:28:19.978 01:30:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:19.978 01:30:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:19.978 01:30:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.978 01:30:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.978 01:30:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:19.978 01:30:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.978 01:30:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.978 [2024-05-15 01:30:55.614585] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.978 01:30:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.978 01:30:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:19.978 01:30:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.978 01:30:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.978 Malloc0 00:28:19.978 01:30:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.978 01:30:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:19.978 01:30:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.978 01:30:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:20.237 01:30:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.237 01:30:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:20.237 01:30:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.237 01:30:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:20.237 01:30:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.237 01:30:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:20.237 01:30:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:28:20.237 01:30:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:20.237 [2024-05-15 01:30:55.683842] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:20.237 [2024-05-15 01:30:55.684080] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.237 01:30:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.237 01:30:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:20.237 01:30:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:20.237 01:30:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:20.237 01:30:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:20.237 01:30:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:20.237 01:30:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:20.237 { 00:28:20.237 "params": { 00:28:20.237 "name": "Nvme$subsystem", 00:28:20.237 "trtype": "$TEST_TRANSPORT", 00:28:20.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.237 "adrfam": "ipv4", 00:28:20.237 "trsvcid": "$NVMF_PORT", 00:28:20.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.237 "hdgst": ${hdgst:-false}, 00:28:20.237 "ddgst": ${ddgst:-false} 00:28:20.237 }, 00:28:20.237 "method": "bdev_nvme_attach_controller" 00:28:20.237 } 00:28:20.237 EOF 00:28:20.237 )") 00:28:20.237 01:30:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:20.237 01:30:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:20.237 01:30:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:20.237 01:30:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:20.237 "params": { 00:28:20.237 "name": "Nvme1", 00:28:20.237 "trtype": "tcp", 00:28:20.237 "traddr": "10.0.0.2", 00:28:20.237 "adrfam": "ipv4", 00:28:20.237 "trsvcid": "4420", 00:28:20.237 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:20.237 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:20.237 "hdgst": false, 00:28:20.237 "ddgst": false 00:28:20.237 }, 00:28:20.237 "method": "bdev_nvme_attach_controller" 00:28:20.237 }' 00:28:20.237 [2024-05-15 01:30:55.733739] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:28:20.237 [2024-05-15 01:30:55.733787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72966 ] 00:28:20.237 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.237 [2024-05-15 01:30:55.803844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.237 [2024-05-15 01:30:55.874257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.497 Running I/O for 1 seconds... 
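Editor's note for orientation: the target bring-up and the first initiator run traced above boil down to the short sequence below. Every RPC name, flag, address, and size is copied from this trace; the $RPC shorthand and the relative paths are stand-ins for the full workspace paths (and for the ip netns exec cvl_0_0_ns_spdk wrapper the target-side calls actually run under in this job), so treat this as an illustrative sketch rather than the literal test script.

  # Stand up the NVMe-oF/TCP target (RPCs exactly as issued in the trace above).
  RPC="scripts/rpc.py"                          # shorthand for the rpc.py path shown in the log
  $RPC nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, options as traced
  $RPC bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose Malloc0 as the namespace
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Drive it with bdevperf: the JSON printed above (bdev_nvme_attach_controller for
  # Nvme1 at 10.0.0.2:4420, header/data digests off) is fed in on a file descriptor.
  build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1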
00:28:21.434 00:28:21.434 Latency(us) 00:28:21.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.434 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:21.434 Verification LBA range: start 0x0 length 0x4000 00:28:21.434 Nvme1n1 : 1.01 12277.70 47.96 0.00 0.00 10375.77 1861.22 20447.23 00:28:21.434 =================================================================================================================== 00:28:21.434 Total : 12277.70 47.96 0.00 0.00 10375.77 1861.22 20447.23 00:28:21.693 01:30:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=73230 00:28:21.693 01:30:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:21.693 01:30:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:21.693 01:30:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:21.693 01:30:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:21.693 01:30:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:21.693 01:30:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:21.693 01:30:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:21.693 { 00:28:21.693 "params": { 00:28:21.693 "name": "Nvme$subsystem", 00:28:21.693 "trtype": "$TEST_TRANSPORT", 00:28:21.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:21.693 "adrfam": "ipv4", 00:28:21.693 "trsvcid": "$NVMF_PORT", 00:28:21.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:21.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:21.693 "hdgst": ${hdgst:-false}, 00:28:21.693 "ddgst": ${ddgst:-false} 00:28:21.693 }, 00:28:21.693 "method": "bdev_nvme_attach_controller" 00:28:21.693 } 00:28:21.693 EOF 00:28:21.693 )") 00:28:21.693 01:30:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:21.693 01:30:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:21.693 01:30:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:21.693 01:30:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:21.693 "params": { 00:28:21.693 "name": "Nvme1", 00:28:21.693 "trtype": "tcp", 00:28:21.693 "traddr": "10.0.0.2", 00:28:21.693 "adrfam": "ipv4", 00:28:21.693 "trsvcid": "4420", 00:28:21.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:21.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:21.693 "hdgst": false, 00:28:21.693 "ddgst": false 00:28:21.693 }, 00:28:21.694 "method": "bdev_nvme_attach_controller" 00:28:21.694 }' 00:28:21.694 [2024-05-15 01:30:57.294335] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:28:21.694 [2024-05-15 01:30:57.294389] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73230 ] 00:28:21.694 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.694 [2024-05-15 01:30:57.363459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.952 [2024-05-15 01:30:57.429968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.952 Running I/O for 15 seconds... 
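The second bdevperf invocation above (-t 15 -f, recorded as bdevperfpid=73230) is the fault-injection pass: right after it starts, the harness hard-kills what appears to be the nvmf target process (the kill -9 72907 at the start of the next trace), which is what produces the flood of aborted completions and the reconnect loop that follow. A sketch of that step in isolation, with the pid variable as a stand-in for the target pid the harness tracks itself:

# $nvmfpid stands in for the target pid recorded earlier by the harness (72907 in this log);
# killing it mid-run forces bdev_nvme through its disconnect/reset/reconnect path.
kill -9 "$nvmfpid"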
00:28:25.282 01:31:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 72907 00:28:25.282 01:31:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:25.282 [2024-05-15 01:31:00.264531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.282 [2024-05-15 01:31:00.264571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.264593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.282 [2024-05-15 01:31:00.264604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.264617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.282 [2024-05-15 01:31:00.264629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.264640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.282 [2024-05-15 01:31:00.264654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.264666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.282 [2024-05-15 01:31:00.264675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.264686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.282 [2024-05-15 01:31:00.264695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.264708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:120992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.282 [2024-05-15 01:31:00.264718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.264730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:121000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.282 [2024-05-15 01:31:00.264741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.264754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:121008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.282 [2024-05-15 01:31:00.264765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.264778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.282 [2024-05-15 
01:31:00.264789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.264802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:121024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.282 [2024-05-15 01:31:00.264812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.264825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:121032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.282 [2024-05-15 01:31:00.264835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.264847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:121040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.282 [2024-05-15 01:31:00.264856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.264868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:121048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.282 [2024-05-15 01:31:00.264878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.264888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.282 [2024-05-15 01:31:00.264898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.264909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.264918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.264930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.264940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.264951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.264961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.264971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.264980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.264991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.265000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.265012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.265021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.265032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.265041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.265052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.265060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.265071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.265080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.265091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.265099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.265110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.265119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.265129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.265138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.265149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.265158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.265168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.265178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.265189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.265201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.265212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.265221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.265232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.265241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.265252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:121064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.282 [2024-05-15 01:31:00.265261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.265272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:121208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.265281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.265292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.265301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.265311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.265320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.265331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.265340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.265351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.265360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.265370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.282 [2024-05-15 01:31:00.265379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.282 [2024-05-15 01:31:00.265390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 
[2024-05-15 01:31:00.265606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265803] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.265982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:121496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.265991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.266002] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.266011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.266021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.266030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.266041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.266050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.266060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.266069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.266079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.266088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.266099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.266108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.266119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:121552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.266128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.266138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.266147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.266158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:121568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.266167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.266178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.283 [2024-05-15 01:31:00.266187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.283 [2024-05-15 01:31:00.266201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 
lba:121584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:121608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:121616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:121624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:121632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:121648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:121664 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:121672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:121680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:121688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:121696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:121704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:121712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:121728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:121736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 
01:31:00.266604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:121752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:121768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:121776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:121784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:121792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:121824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:121832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:121840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:121848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:121888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:121896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.266989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:121904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.266998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.284 [2024-05-15 01:31:00.267008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:121912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.284 [2024-05-15 01:31:00.267017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.285 [2024-05-15 01:31:00.267027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.285 [2024-05-15 01:31:00.267036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.285 [2024-05-15 01:31:00.267047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:121928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.285 [2024-05-15 01:31:00.267056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.285 [2024-05-15 01:31:00.267066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.285 [2024-05-15 01:31:00.267075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.285 [2024-05-15 01:31:00.267086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:121944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.285 [2024-05-15 01:31:00.267094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.285 [2024-05-15 01:31:00.267105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:25.285 [2024-05-15 01:31:00.267114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.285 [2024-05-15 01:31:00.267124] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b48610 is same with the state(5) to be set 00:28:25.285 [2024-05-15 01:31:00.267135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:25.285 [2024-05-15 01:31:00.267143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:25.285 [2024-05-15 01:31:00.267150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121960 len:8 PRP1 0x0 PRP2 0x0 00:28:25.285 [2024-05-15 01:31:00.267160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.285 [2024-05-15 01:31:00.267214] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b48610 was disconnected and freed. reset controller. 
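Every queued I/O above completes with "(00/08)", i.e. status code type 0x0 (generic command status) and status code 0x08, "Command Aborted due to SQ Deletion": the commands were still queued when the TCP connection, and with it the submission queue, went away, so the driver completes them manually as aborted and schedules a controller reset. The corresponding constant lives in the SPDK headers (path relative to the repo root):

# SPDK_NVME_SC_ABORTED_SQ_DELETION (0x08) in the generic command status set is the
# status printed as "(00/08)" in the completions above.
grep -n 'ABORTED_SQ_DELETION' include/spdk/nvme_spec.h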
00:28:25.285 [2024-05-15 01:31:00.267261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:25.285 [2024-05-15 01:31:00.267273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.285 [2024-05-15 01:31:00.267283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:25.285 [2024-05-15 01:31:00.267293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.285 [2024-05-15 01:31:00.267302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:25.285 [2024-05-15 01:31:00.267311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.285 [2024-05-15 01:31:00.267321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:25.285 [2024-05-15 01:31:00.267330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:25.285 [2024-05-15 01:31:00.267339] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.285 [2024-05-15 01:31:00.270011] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.285 [2024-05-15 01:31:00.270037] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.285 [2024-05-15 01:31:00.270853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.285 [2024-05-15 01:31:00.271229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.285 [2024-05-15 01:31:00.271243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.285 [2024-05-15 01:31:00.271254] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.285 [2024-05-15 01:31:00.271428] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.285 [2024-05-15 01:31:00.271600] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.285 [2024-05-15 01:31:00.271610] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.285 [2024-05-15 01:31:00.271620] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.285 [2024-05-15 01:31:00.274319] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
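The reconnect attempts fail inside posix_sock_create with errno = 111, which on Linux is ECONNREFUSED ("Connection refused"), exactly what is expected once the process listening on 10.0.0.2:4420 has been killed. A quick way to confirm the mapping:

# errno 111 on Linux is ECONNREFUSED; nothing is listening on 10.0.0.2:4420 any more.
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# prints: ECONNREFUSED - Connection refused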
00:28:25.285 [2024-05-15 01:31:00.283235] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.285 [2024-05-15 01:31:00.283849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.285 [2024-05-15 01:31:00.284297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.285 [2024-05-15 01:31:00.284311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.285 [2024-05-15 01:31:00.284321] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.285 [2024-05-15 01:31:00.284493] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.285 [2024-05-15 01:31:00.284665] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.285 [2024-05-15 01:31:00.284676] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.285 [2024-05-15 01:31:00.284689] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.285 [2024-05-15 01:31:00.287394] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.285 [2024-05-15 01:31:00.296247] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.285 [2024-05-15 01:31:00.296882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.285 [2024-05-15 01:31:00.297292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.285 [2024-05-15 01:31:00.297306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.285 [2024-05-15 01:31:00.297315] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.285 [2024-05-15 01:31:00.297489] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.285 [2024-05-15 01:31:00.297661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.285 [2024-05-15 01:31:00.297671] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.285 [2024-05-15 01:31:00.297681] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.285 [2024-05-15 01:31:00.300342] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.285 [2024-05-15 01:31:00.309177] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.285 [2024-05-15 01:31:00.309821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.285 [2024-05-15 01:31:00.310295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.285 [2024-05-15 01:31:00.310338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.285 [2024-05-15 01:31:00.310371] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.285 [2024-05-15 01:31:00.310967] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.285 [2024-05-15 01:31:00.311561] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.285 [2024-05-15 01:31:00.311572] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.285 [2024-05-15 01:31:00.311581] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.285 [2024-05-15 01:31:00.314255] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.285 [2024-05-15 01:31:00.322145] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.285 [2024-05-15 01:31:00.322786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.285 [2024-05-15 01:31:00.323296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.285 [2024-05-15 01:31:00.323338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.285 [2024-05-15 01:31:00.323370] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.285 [2024-05-15 01:31:00.323677] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.285 [2024-05-15 01:31:00.323918] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.285 [2024-05-15 01:31:00.323932] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.285 [2024-05-15 01:31:00.323944] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.285 [2024-05-15 01:31:00.327720] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.285 [2024-05-15 01:31:00.335519] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.285 [2024-05-15 01:31:00.336111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.285 [2024-05-15 01:31:00.336551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.285 [2024-05-15 01:31:00.336594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.285 [2024-05-15 01:31:00.336626] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.285 [2024-05-15 01:31:00.337234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.285 [2024-05-15 01:31:00.337665] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.285 [2024-05-15 01:31:00.337676] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.285 [2024-05-15 01:31:00.337685] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.285 [2024-05-15 01:31:00.340414] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.285 [2024-05-15 01:31:00.348349] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.285 [2024-05-15 01:31:00.348958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.285 [2024-05-15 01:31:00.349475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.285 [2024-05-15 01:31:00.349518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.285 [2024-05-15 01:31:00.349550] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.285 [2024-05-15 01:31:00.350144] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.286 [2024-05-15 01:31:00.350611] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.286 [2024-05-15 01:31:00.350622] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.286 [2024-05-15 01:31:00.350630] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.286 [2024-05-15 01:31:00.353291] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.286 [2024-05-15 01:31:00.361154] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.286 [2024-05-15 01:31:00.361779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.286 [2024-05-15 01:31:00.362285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.286 [2024-05-15 01:31:00.362328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.286 [2024-05-15 01:31:00.362360] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.286 [2024-05-15 01:31:00.362565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.286 [2024-05-15 01:31:00.362738] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.286 [2024-05-15 01:31:00.362748] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.286 [2024-05-15 01:31:00.362757] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.286 [2024-05-15 01:31:00.365418] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.286 [2024-05-15 01:31:00.374020] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.286 [2024-05-15 01:31:00.374577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.286 [2024-05-15 01:31:00.375085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.286 [2024-05-15 01:31:00.375126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.286 [2024-05-15 01:31:00.375159] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.286 [2024-05-15 01:31:00.375710] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.286 [2024-05-15 01:31:00.375884] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.286 [2024-05-15 01:31:00.375894] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.286 [2024-05-15 01:31:00.375903] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.286 [2024-05-15 01:31:00.378557] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.286 [2024-05-15 01:31:00.386952] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.286 [2024-05-15 01:31:00.387545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.286 [2024-05-15 01:31:00.387980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.286 [2024-05-15 01:31:00.388021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.286 [2024-05-15 01:31:00.388053] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.286 [2024-05-15 01:31:00.388516] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.286 [2024-05-15 01:31:00.388688] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.286 [2024-05-15 01:31:00.388699] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.286 [2024-05-15 01:31:00.388708] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.286 [2024-05-15 01:31:00.391368] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.286 [2024-05-15 01:31:00.399877] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.286 [2024-05-15 01:31:00.400503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.286 [2024-05-15 01:31:00.401002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.286 [2024-05-15 01:31:00.401042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.286 [2024-05-15 01:31:00.401075] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.286 [2024-05-15 01:31:00.401477] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.286 [2024-05-15 01:31:00.401645] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.286 [2024-05-15 01:31:00.401655] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.286 [2024-05-15 01:31:00.401664] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.286 [2024-05-15 01:31:00.404332] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.286 [2024-05-15 01:31:00.412716] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.286 [2024-05-15 01:31:00.413266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.286 [2024-05-15 01:31:00.413764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.286 [2024-05-15 01:31:00.413804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.286 [2024-05-15 01:31:00.413836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.286 [2024-05-15 01:31:00.414449] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.286 [2024-05-15 01:31:00.414661] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.286 [2024-05-15 01:31:00.414687] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.286 [2024-05-15 01:31:00.414696] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.286 [2024-05-15 01:31:00.417362] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.286 [2024-05-15 01:31:00.425579] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.286 [2024-05-15 01:31:00.426213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.286 [2024-05-15 01:31:00.426705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.286 [2024-05-15 01:31:00.426745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.286 [2024-05-15 01:31:00.426777] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.286 [2024-05-15 01:31:00.427387] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.286 [2024-05-15 01:31:00.427749] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.286 [2024-05-15 01:31:00.427759] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.286 [2024-05-15 01:31:00.427768] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.286 [2024-05-15 01:31:00.430432] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.286 [2024-05-15 01:31:00.438373] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.286 [2024-05-15 01:31:00.438996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.286 [2024-05-15 01:31:00.439502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.286 [2024-05-15 01:31:00.439545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.286 [2024-05-15 01:31:00.439577] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.286 [2024-05-15 01:31:00.440172] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.286 [2024-05-15 01:31:00.440791] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.286 [2024-05-15 01:31:00.440801] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.286 [2024-05-15 01:31:00.440810] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.286 [2024-05-15 01:31:00.443464] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.286 [2024-05-15 01:31:00.451227] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.286 [2024-05-15 01:31:00.451834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.286 [2024-05-15 01:31:00.452349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.286 [2024-05-15 01:31:00.452398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.286 [2024-05-15 01:31:00.452430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.286 [2024-05-15 01:31:00.452667] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.286 [2024-05-15 01:31:00.452839] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.286 [2024-05-15 01:31:00.452849] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.286 [2024-05-15 01:31:00.452858] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.286 [2024-05-15 01:31:00.455540] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.286 [2024-05-15 01:31:00.464086] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.286 [2024-05-15 01:31:00.464688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.286 [2024-05-15 01:31:00.465150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.286 [2024-05-15 01:31:00.465204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.286 [2024-05-15 01:31:00.465237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.287 [2024-05-15 01:31:00.465834] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.287 [2024-05-15 01:31:00.466195] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.287 [2024-05-15 01:31:00.466210] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.287 [2024-05-15 01:31:00.466222] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.287 [2024-05-15 01:31:00.469995] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.287 [2024-05-15 01:31:00.477639] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.287 [2024-05-15 01:31:00.478288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.478790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.478829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.287 [2024-05-15 01:31:00.478861] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.287 [2024-05-15 01:31:00.479472] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.287 [2024-05-15 01:31:00.479900] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.287 [2024-05-15 01:31:00.479910] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.287 [2024-05-15 01:31:00.479919] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.287 [2024-05-15 01:31:00.482561] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.287 [2024-05-15 01:31:00.490405] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.287 [2024-05-15 01:31:00.491001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.491441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.491484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.287 [2024-05-15 01:31:00.491523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.287 [2024-05-15 01:31:00.492000] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.287 [2024-05-15 01:31:00.492169] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.287 [2024-05-15 01:31:00.492179] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.287 [2024-05-15 01:31:00.492188] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.287 [2024-05-15 01:31:00.494918] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.287 [2024-05-15 01:31:00.503306] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.287 [2024-05-15 01:31:00.503936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.504449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.504491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.287 [2024-05-15 01:31:00.504523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.287 [2024-05-15 01:31:00.504926] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.287 [2024-05-15 01:31:00.505094] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.287 [2024-05-15 01:31:00.505104] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.287 [2024-05-15 01:31:00.505113] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.287 [2024-05-15 01:31:00.507743] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.287 [2024-05-15 01:31:00.516142] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.287 [2024-05-15 01:31:00.516804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.517298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.517342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.287 [2024-05-15 01:31:00.517374] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.287 [2024-05-15 01:31:00.517971] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.287 [2024-05-15 01:31:00.518287] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.287 [2024-05-15 01:31:00.518298] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.287 [2024-05-15 01:31:00.518307] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.287 [2024-05-15 01:31:00.521021] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.287 [2024-05-15 01:31:00.529173] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.287 [2024-05-15 01:31:00.529791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.530186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.530239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.287 [2024-05-15 01:31:00.530272] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.287 [2024-05-15 01:31:00.530874] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.287 [2024-05-15 01:31:00.531146] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.287 [2024-05-15 01:31:00.531156] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.287 [2024-05-15 01:31:00.531165] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.287 [2024-05-15 01:31:00.533884] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.287 [2024-05-15 01:31:00.542224] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.287 [2024-05-15 01:31:00.542826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.543265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.543307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.287 [2024-05-15 01:31:00.543340] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.287 [2024-05-15 01:31:00.543853] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.287 [2024-05-15 01:31:00.544026] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.287 [2024-05-15 01:31:00.544037] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.287 [2024-05-15 01:31:00.544046] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.287 [2024-05-15 01:31:00.546756] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.287 [2024-05-15 01:31:00.555103] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.287 [2024-05-15 01:31:00.555714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.556200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.556214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.287 [2024-05-15 01:31:00.556223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.287 [2024-05-15 01:31:00.556400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.287 [2024-05-15 01:31:00.556568] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.287 [2024-05-15 01:31:00.556578] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.287 [2024-05-15 01:31:00.556587] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.287 [2024-05-15 01:31:00.559233] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.287 [2024-05-15 01:31:00.567997] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.287 [2024-05-15 01:31:00.568632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.569143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.569183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.287 [2024-05-15 01:31:00.569229] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.287 [2024-05-15 01:31:00.569819] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.287 [2024-05-15 01:31:00.569995] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.287 [2024-05-15 01:31:00.570006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.287 [2024-05-15 01:31:00.570015] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.287 [2024-05-15 01:31:00.572710] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.287 [2024-05-15 01:31:00.580942] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.287 [2024-05-15 01:31:00.581551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.581987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.582027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.287 [2024-05-15 01:31:00.582059] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.287 [2024-05-15 01:31:00.582670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.287 [2024-05-15 01:31:00.583121] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.287 [2024-05-15 01:31:00.583131] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.287 [2024-05-15 01:31:00.583140] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.287 [2024-05-15 01:31:00.585791] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.287 [2024-05-15 01:31:00.593788] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.287 [2024-05-15 01:31:00.594408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.594782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.594794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.287 [2024-05-15 01:31:00.594803] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.287 [2024-05-15 01:31:00.594971] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.287 [2024-05-15 01:31:00.595138] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.287 [2024-05-15 01:31:00.595148] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.287 [2024-05-15 01:31:00.595157] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.287 [2024-05-15 01:31:00.597842] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.287 [2024-05-15 01:31:00.606627] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.287 [2024-05-15 01:31:00.607242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.607662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.607703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.287 [2024-05-15 01:31:00.607735] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.287 [2024-05-15 01:31:00.608251] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.287 [2024-05-15 01:31:00.608419] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.287 [2024-05-15 01:31:00.608432] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.287 [2024-05-15 01:31:00.608441] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.287 [2024-05-15 01:31:00.611094] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.287 [2024-05-15 01:31:00.619637] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.287 [2024-05-15 01:31:00.620271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.620770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.620811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.287 [2024-05-15 01:31:00.620845] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.287 [2024-05-15 01:31:00.621069] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.287 [2024-05-15 01:31:00.621272] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.287 [2024-05-15 01:31:00.621284] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.287 [2024-05-15 01:31:00.621293] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.287 [2024-05-15 01:31:00.623957] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.287 [2024-05-15 01:31:00.632570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.287 [2024-05-15 01:31:00.633215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.633714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.287 [2024-05-15 01:31:00.633753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.287 [2024-05-15 01:31:00.633785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.287 [2024-05-15 01:31:00.634143] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.287 [2024-05-15 01:31:00.634321] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.287 [2024-05-15 01:31:00.634332] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.287 [2024-05-15 01:31:00.634341] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.287 [2024-05-15 01:31:00.637040] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.287 [2024-05-15 01:31:00.645506] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.288 [2024-05-15 01:31:00.646141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.646609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.646622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.288 [2024-05-15 01:31:00.646632] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.288 [2024-05-15 01:31:00.646803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.288 [2024-05-15 01:31:00.646976] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.288 [2024-05-15 01:31:00.646986] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.288 [2024-05-15 01:31:00.646998] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.288 [2024-05-15 01:31:00.649678] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.288 [2024-05-15 01:31:00.658494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.288 [2024-05-15 01:31:00.659146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.659577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.659618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.288 [2024-05-15 01:31:00.659650] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.288 [2024-05-15 01:31:00.660139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.288 [2024-05-15 01:31:00.660329] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.288 [2024-05-15 01:31:00.660340] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.288 [2024-05-15 01:31:00.660349] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.288 [2024-05-15 01:31:00.663057] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.288 [2024-05-15 01:31:00.671484] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.288 [2024-05-15 01:31:00.672103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.672593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.672636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.288 [2024-05-15 01:31:00.672669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.288 [2024-05-15 01:31:00.673184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.288 [2024-05-15 01:31:00.673356] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.288 [2024-05-15 01:31:00.673367] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.288 [2024-05-15 01:31:00.673375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.288 [2024-05-15 01:31:00.676038] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.288 [2024-05-15 01:31:00.684452] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.288 [2024-05-15 01:31:00.685002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.685516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.685558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.288 [2024-05-15 01:31:00.685591] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.288 [2024-05-15 01:31:00.686211] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.288 [2024-05-15 01:31:00.686523] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.288 [2024-05-15 01:31:00.686534] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.288 [2024-05-15 01:31:00.686542] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.288 [2024-05-15 01:31:00.689247] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.288 [2024-05-15 01:31:00.697374] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.288 [2024-05-15 01:31:00.697944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.698395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.698437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.288 [2024-05-15 01:31:00.698470] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.288 [2024-05-15 01:31:00.698744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.288 [2024-05-15 01:31:00.698984] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.288 [2024-05-15 01:31:00.698998] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.288 [2024-05-15 01:31:00.699011] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.288 [2024-05-15 01:31:00.702805] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.288 [2024-05-15 01:31:00.710600] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.288 [2024-05-15 01:31:00.711148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.711505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.711519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.288 [2024-05-15 01:31:00.711529] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.288 [2024-05-15 01:31:00.711700] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.288 [2024-05-15 01:31:00.711873] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.288 [2024-05-15 01:31:00.711883] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.288 [2024-05-15 01:31:00.711892] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.288 [2024-05-15 01:31:00.714564] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.288 [2024-05-15 01:31:00.723625] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.288 [2024-05-15 01:31:00.724269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.724736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.724776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.288 [2024-05-15 01:31:00.724808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.288 [2024-05-15 01:31:00.725275] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.288 [2024-05-15 01:31:00.725447] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.288 [2024-05-15 01:31:00.725459] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.288 [2024-05-15 01:31:00.725468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.288 [2024-05-15 01:31:00.728131] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.288 [2024-05-15 01:31:00.736562] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.288 [2024-05-15 01:31:00.737204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.737615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.737628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.288 [2024-05-15 01:31:00.737637] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.288 [2024-05-15 01:31:00.737809] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.288 [2024-05-15 01:31:00.737981] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.288 [2024-05-15 01:31:00.737992] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.288 [2024-05-15 01:31:00.738001] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.288 [2024-05-15 01:31:00.740655] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.288 [2024-05-15 01:31:00.749446] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.288 [2024-05-15 01:31:00.749979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.750447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.750490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.288 [2024-05-15 01:31:00.750523] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.288 [2024-05-15 01:31:00.751119] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.288 [2024-05-15 01:31:00.751543] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.288 [2024-05-15 01:31:00.751554] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.288 [2024-05-15 01:31:00.751563] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.288 [2024-05-15 01:31:00.754257] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.288 [2024-05-15 01:31:00.762352] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.288 [2024-05-15 01:31:00.762993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.763425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.763438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.288 [2024-05-15 01:31:00.763448] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.288 [2024-05-15 01:31:00.763620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.288 [2024-05-15 01:31:00.763791] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.288 [2024-05-15 01:31:00.763801] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.288 [2024-05-15 01:31:00.763811] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.288 [2024-05-15 01:31:00.766430] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.288 [2024-05-15 01:31:00.775390] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.288 [2024-05-15 01:31:00.776075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.776490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.776504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.288 [2024-05-15 01:31:00.776515] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.288 [2024-05-15 01:31:00.776688] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.288 [2024-05-15 01:31:00.776861] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.288 [2024-05-15 01:31:00.776873] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.288 [2024-05-15 01:31:00.776882] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.288 [2024-05-15 01:31:00.779577] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.288 [2024-05-15 01:31:00.788327] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.288 [2024-05-15 01:31:00.788906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.789347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.789360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.288 [2024-05-15 01:31:00.789370] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.288 [2024-05-15 01:31:00.789541] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.288 [2024-05-15 01:31:00.789712] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.288 [2024-05-15 01:31:00.789722] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.288 [2024-05-15 01:31:00.789731] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.288 [2024-05-15 01:31:00.792432] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.288 [2024-05-15 01:31:00.801378] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.288 [2024-05-15 01:31:00.801994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.802349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.802362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.288 [2024-05-15 01:31:00.802371] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.288 [2024-05-15 01:31:00.802543] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.288 [2024-05-15 01:31:00.802738] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.288 [2024-05-15 01:31:00.802749] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.288 [2024-05-15 01:31:00.802759] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.288 [2024-05-15 01:31:00.805631] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.288 [2024-05-15 01:31:00.814618] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.288 [2024-05-15 01:31:00.815269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.815748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.288 [2024-05-15 01:31:00.815802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.288 [2024-05-15 01:31:00.815835] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.288 [2024-05-15 01:31:00.816450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.288 [2024-05-15 01:31:00.816633] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.289 [2024-05-15 01:31:00.816644] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.289 [2024-05-15 01:31:00.816654] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.289 [2024-05-15 01:31:00.819443] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.289 [2024-05-15 01:31:00.827567] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.289 [2024-05-15 01:31:00.828187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.828639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.828679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.289 [2024-05-15 01:31:00.828711] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.289 [2024-05-15 01:31:00.829320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.289 [2024-05-15 01:31:00.829816] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.289 [2024-05-15 01:31:00.829826] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.289 [2024-05-15 01:31:00.829835] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.289 [2024-05-15 01:31:00.832534] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.289 [2024-05-15 01:31:00.840496] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.289 [2024-05-15 01:31:00.841126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.841615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.841657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.289 [2024-05-15 01:31:00.841689] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.289 [2024-05-15 01:31:00.842299] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.289 [2024-05-15 01:31:00.842833] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.289 [2024-05-15 01:31:00.842843] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.289 [2024-05-15 01:31:00.842852] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.289 [2024-05-15 01:31:00.845536] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.289 [2024-05-15 01:31:00.853391] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.289 [2024-05-15 01:31:00.853894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.854381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.854422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.289 [2024-05-15 01:31:00.854462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.289 [2024-05-15 01:31:00.854992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.289 [2024-05-15 01:31:00.855161] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.289 [2024-05-15 01:31:00.855172] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.289 [2024-05-15 01:31:00.855181] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.289 [2024-05-15 01:31:00.857873] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.289 [2024-05-15 01:31:00.866218] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.289 [2024-05-15 01:31:00.866751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.867061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.867101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.289 [2024-05-15 01:31:00.867133] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.289 [2024-05-15 01:31:00.867643] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.289 [2024-05-15 01:31:00.867811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.289 [2024-05-15 01:31:00.867821] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.289 [2024-05-15 01:31:00.867830] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.289 [2024-05-15 01:31:00.870535] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.289 [2024-05-15 01:31:00.879156] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.289 [2024-05-15 01:31:00.879736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.880135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.880175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.289 [2024-05-15 01:31:00.880220] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.289 [2024-05-15 01:31:00.880658] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.289 [2024-05-15 01:31:00.880825] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.289 [2024-05-15 01:31:00.880835] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.289 [2024-05-15 01:31:00.880845] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.289 [2024-05-15 01:31:00.883559] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.289 [2024-05-15 01:31:00.891971] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.289 [2024-05-15 01:31:00.892516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.892943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.892983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.289 [2024-05-15 01:31:00.893016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.289 [2024-05-15 01:31:00.893637] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.289 [2024-05-15 01:31:00.893811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.289 [2024-05-15 01:31:00.893821] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.289 [2024-05-15 01:31:00.893830] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.289 [2024-05-15 01:31:00.896509] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.289 [2024-05-15 01:31:00.904914] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.289 [2024-05-15 01:31:00.905533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.906005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.906045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.289 [2024-05-15 01:31:00.906078] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.289 [2024-05-15 01:31:00.906620] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.289 [2024-05-15 01:31:00.906794] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.289 [2024-05-15 01:31:00.906804] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.289 [2024-05-15 01:31:00.906814] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.289 [2024-05-15 01:31:00.909458] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.289 [2024-05-15 01:31:00.917878] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.289 [2024-05-15 01:31:00.918501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.918976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.919010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.289 [2024-05-15 01:31:00.919020] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.289 [2024-05-15 01:31:00.919195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.289 [2024-05-15 01:31:00.919376] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.289 [2024-05-15 01:31:00.919386] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.289 [2024-05-15 01:31:00.919395] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.289 [2024-05-15 01:31:00.922014] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.289 [2024-05-15 01:31:00.930852] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.289 [2024-05-15 01:31:00.931413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.931771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.931784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.289 [2024-05-15 01:31:00.931794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.289 [2024-05-15 01:31:00.931961] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.289 [2024-05-15 01:31:00.932131] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.289 [2024-05-15 01:31:00.932142] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.289 [2024-05-15 01:31:00.932150] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.289 [2024-05-15 01:31:00.934830] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.289 [2024-05-15 01:31:00.943794] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.289 [2024-05-15 01:31:00.944444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.944864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.944904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.289 [2024-05-15 01:31:00.944936] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.289 [2024-05-15 01:31:00.945140] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.289 [2024-05-15 01:31:00.945329] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.289 [2024-05-15 01:31:00.945350] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.289 [2024-05-15 01:31:00.945359] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.289 [2024-05-15 01:31:00.948009] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.289 [2024-05-15 01:31:00.956761] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.289 [2024-05-15 01:31:00.957377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.957739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.289 [2024-05-15 01:31:00.957751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.289 [2024-05-15 01:31:00.957761] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.289 [2024-05-15 01:31:00.957930] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.289 [2024-05-15 01:31:00.958098] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.289 [2024-05-15 01:31:00.958108] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.289 [2024-05-15 01:31:00.958117] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.289 [2024-05-15 01:31:00.960810] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.549 [2024-05-15 01:31:00.969692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.549 [2024-05-15 01:31:00.970335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.549 [2024-05-15 01:31:00.970745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.549 [2024-05-15 01:31:00.970758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.549 [2024-05-15 01:31:00.970768] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.549 [2024-05-15 01:31:00.970939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.549 [2024-05-15 01:31:00.971111] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.549 [2024-05-15 01:31:00.971125] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.549 [2024-05-15 01:31:00.971134] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.549 [2024-05-15 01:31:00.973853] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.549 [2024-05-15 01:31:00.982517] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.549 [2024-05-15 01:31:00.983175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.549 [2024-05-15 01:31:00.983610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.549 [2024-05-15 01:31:00.983622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.549 [2024-05-15 01:31:00.983632] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.549 [2024-05-15 01:31:00.983805] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.549 [2024-05-15 01:31:00.983976] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.549 [2024-05-15 01:31:00.983987] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.549 [2024-05-15 01:31:00.983995] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.549 [2024-05-15 01:31:00.986702] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.549 [2024-05-15 01:31:00.995371] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.549 [2024-05-15 01:31:00.996422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.549 [2024-05-15 01:31:00.996789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.549 [2024-05-15 01:31:00.996803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.549 [2024-05-15 01:31:00.996814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.549 [2024-05-15 01:31:00.996994] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.549 [2024-05-15 01:31:00.997169] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.549 [2024-05-15 01:31:00.997180] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.549 [2024-05-15 01:31:00.997189] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.549 [2024-05-15 01:31:00.999887] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.549 [2024-05-15 01:31:01.008273] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.549 [2024-05-15 01:31:01.008897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.549 [2024-05-15 01:31:01.009306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.549 [2024-05-15 01:31:01.009319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.549 [2024-05-15 01:31:01.009330] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.549 [2024-05-15 01:31:01.009502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.549 [2024-05-15 01:31:01.009676] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.549 [2024-05-15 01:31:01.009687] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.549 [2024-05-15 01:31:01.009699] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.549 [2024-05-15 01:31:01.012395] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.549 [2024-05-15 01:31:01.021167] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.549 [2024-05-15 01:31:01.021863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.549 [2024-05-15 01:31:01.022278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.549 [2024-05-15 01:31:01.022292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.549 [2024-05-15 01:31:01.022303] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.549 [2024-05-15 01:31:01.022481] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.549 [2024-05-15 01:31:01.022654] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.549 [2024-05-15 01:31:01.022665] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.549 [2024-05-15 01:31:01.022675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.549 [2024-05-15 01:31:01.025376] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.549 [2024-05-15 01:31:01.034157] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.549 [2024-05-15 01:31:01.034815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.549 [2024-05-15 01:31:01.035197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.549 [2024-05-15 01:31:01.035210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.549 [2024-05-15 01:31:01.035221] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.549 [2024-05-15 01:31:01.035413] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.549 [2024-05-15 01:31:01.035586] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.549 [2024-05-15 01:31:01.035597] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.549 [2024-05-15 01:31:01.035608] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.549 [2024-05-15 01:31:01.038333] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.549 [2024-05-15 01:31:01.047206] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.549 [2024-05-15 01:31:01.047775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.549 [2024-05-15 01:31:01.048136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.549 [2024-05-15 01:31:01.048149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.549 [2024-05-15 01:31:01.048159] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.549 [2024-05-15 01:31:01.048348] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.549 [2024-05-15 01:31:01.048532] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.549 [2024-05-15 01:31:01.048542] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.549 [2024-05-15 01:31:01.048552] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.549 [2024-05-15 01:31:01.051386] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.549 [2024-05-15 01:31:01.060317] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.549 [2024-05-15 01:31:01.060961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.549 [2024-05-15 01:31:01.061394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.549 [2024-05-15 01:31:01.061408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.549 [2024-05-15 01:31:01.061418] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.549 [2024-05-15 01:31:01.061601] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.549 [2024-05-15 01:31:01.061783] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.550 [2024-05-15 01:31:01.061793] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.550 [2024-05-15 01:31:01.061814] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.550 [2024-05-15 01:31:01.064677] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.550 [2024-05-15 01:31:01.073395] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.550 [2024-05-15 01:31:01.073883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.550 [2024-05-15 01:31:01.074150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.550 [2024-05-15 01:31:01.074163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.550 [2024-05-15 01:31:01.074172] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.550 [2024-05-15 01:31:01.074349] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.550 [2024-05-15 01:31:01.074521] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.550 [2024-05-15 01:31:01.074531] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.550 [2024-05-15 01:31:01.074540] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.550 [2024-05-15 01:31:01.077232] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.550 [2024-05-15 01:31:01.086376] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.550 [2024-05-15 01:31:01.086934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.550 [2024-05-15 01:31:01.087362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.550 [2024-05-15 01:31:01.087377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.550 [2024-05-15 01:31:01.087387] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.550 [2024-05-15 01:31:01.087563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.550 [2024-05-15 01:31:01.087735] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.550 [2024-05-15 01:31:01.087746] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.550 [2024-05-15 01:31:01.087756] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.550 [2024-05-15 01:31:01.090461] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.550 [2024-05-15 01:31:01.099361] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.550 [2024-05-15 01:31:01.099992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.550 [2024-05-15 01:31:01.100420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.550 [2024-05-15 01:31:01.100433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.550 [2024-05-15 01:31:01.100443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.550 [2024-05-15 01:31:01.100615] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.550 [2024-05-15 01:31:01.100789] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.550 [2024-05-15 01:31:01.100800] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.550 [2024-05-15 01:31:01.100809] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.550 [2024-05-15 01:31:01.103518] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.550 [2024-05-15 01:31:01.112354] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.550 [2024-05-15 01:31:01.112958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.550 [2024-05-15 01:31:01.113389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.550 [2024-05-15 01:31:01.113402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.550 [2024-05-15 01:31:01.113412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.550 [2024-05-15 01:31:01.113583] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.550 [2024-05-15 01:31:01.113755] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.550 [2024-05-15 01:31:01.113766] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.550 [2024-05-15 01:31:01.113775] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.550 [2024-05-15 01:31:01.116480] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.550 [2024-05-15 01:31:01.125399] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.550 [2024-05-15 01:31:01.125952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.550 [2024-05-15 01:31:01.126327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.550 [2024-05-15 01:31:01.126369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.550 [2024-05-15 01:31:01.126402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.550 [2024-05-15 01:31:01.126997] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.550 [2024-05-15 01:31:01.127296] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.550 [2024-05-15 01:31:01.127306] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.550 [2024-05-15 01:31:01.127315] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.550 [2024-05-15 01:31:01.130010] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.550 [2024-05-15 01:31:01.138460] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.550 [2024-05-15 01:31:01.139024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.550 [2024-05-15 01:31:01.139338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.550 [2024-05-15 01:31:01.139351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.550 [2024-05-15 01:31:01.139361] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.550 [2024-05-15 01:31:01.139533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.550 [2024-05-15 01:31:01.139705] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.550 [2024-05-15 01:31:01.139716] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.550 [2024-05-15 01:31:01.139724] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.550 [2024-05-15 01:31:01.142434] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.550 [2024-05-15 01:31:01.151499] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.550 [2024-05-15 01:31:01.152046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.550 [2024-05-15 01:31:01.152516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.550 [2024-05-15 01:31:01.152558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.550 [2024-05-15 01:31:01.152590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.550 [2024-05-15 01:31:01.153186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.550 [2024-05-15 01:31:01.153613] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.550 [2024-05-15 01:31:01.153624] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.550 [2024-05-15 01:31:01.153634] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.550 [2024-05-15 01:31:01.156341] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.550 [2024-05-15 01:31:01.164475] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.550 [2024-05-15 01:31:01.165040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.550 [2024-05-15 01:31:01.165455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.550 [2024-05-15 01:31:01.165501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.550 [2024-05-15 01:31:01.165534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.550 [2024-05-15 01:31:01.165763] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.550 [2024-05-15 01:31:01.165932] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.550 [2024-05-15 01:31:01.165943] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.550 [2024-05-15 01:31:01.165951] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.550 [2024-05-15 01:31:01.168664] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.550 [2024-05-15 01:31:01.177480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.550 [2024-05-15 01:31:01.178108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.550 [2024-05-15 01:31:01.178458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.550 [2024-05-15 01:31:01.178475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.550 [2024-05-15 01:31:01.178485] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.550 [2024-05-15 01:31:01.178658] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.550 [2024-05-15 01:31:01.178832] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.550 [2024-05-15 01:31:01.178842] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.550 [2024-05-15 01:31:01.178851] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.550 [2024-05-15 01:31:01.181555] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.550 [2024-05-15 01:31:01.190477] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.550 [2024-05-15 01:31:01.191118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.550 [2024-05-15 01:31:01.191549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.550 [2024-05-15 01:31:01.191591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.551 [2024-05-15 01:31:01.191623] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.551 [2024-05-15 01:31:01.191814] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.551 [2024-05-15 01:31:01.191985] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.551 [2024-05-15 01:31:01.191996] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.551 [2024-05-15 01:31:01.192005] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.551 [2024-05-15 01:31:01.194698] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.551 [2024-05-15 01:31:01.203470] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.551 [2024-05-15 01:31:01.204130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.551 [2024-05-15 01:31:01.204559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.551 [2024-05-15 01:31:01.204601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.551 [2024-05-15 01:31:01.204633] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.551 [2024-05-15 01:31:01.205196] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.551 [2024-05-15 01:31:01.205435] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.551 [2024-05-15 01:31:01.205449] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.551 [2024-05-15 01:31:01.205461] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.551 [2024-05-15 01:31:01.209238] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.551 [2024-05-15 01:31:01.216924] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.551 [2024-05-15 01:31:01.217497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.551 [2024-05-15 01:31:01.217938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.551 [2024-05-15 01:31:01.217978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.551 [2024-05-15 01:31:01.218018] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.551 [2024-05-15 01:31:01.218476] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.551 [2024-05-15 01:31:01.218644] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.551 [2024-05-15 01:31:01.218654] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.551 [2024-05-15 01:31:01.218663] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.551 [2024-05-15 01:31:01.221303] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.551 [2024-05-15 01:31:01.229683] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.551 [2024-05-15 01:31:01.230281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.551 [2024-05-15 01:31:01.230719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.551 [2024-05-15 01:31:01.230759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.551 [2024-05-15 01:31:01.230791] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.551 [2024-05-15 01:31:01.231400] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.551 [2024-05-15 01:31:01.231800] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.551 [2024-05-15 01:31:01.231810] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.551 [2024-05-15 01:31:01.231819] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.551 [2024-05-15 01:31:01.234505] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.810 [2024-05-15 01:31:01.242570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.811 [2024-05-15 01:31:01.243221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.811 [2024-05-15 01:31:01.243495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.811 [2024-05-15 01:31:01.243535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.811 [2024-05-15 01:31:01.243567] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.811 [2024-05-15 01:31:01.243802] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.811 [2024-05-15 01:31:01.243975] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.811 [2024-05-15 01:31:01.243986] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.811 [2024-05-15 01:31:01.243995] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.811 [2024-05-15 01:31:01.246693] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.811 [2024-05-15 01:31:01.255432] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.811 [2024-05-15 01:31:01.256059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.811 [2024-05-15 01:31:01.256319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.811 [2024-05-15 01:31:01.256361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.811 [2024-05-15 01:31:01.256395] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.811 [2024-05-15 01:31:01.256601] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.811 [2024-05-15 01:31:01.256769] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.811 [2024-05-15 01:31:01.256779] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.811 [2024-05-15 01:31:01.256788] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.811 [2024-05-15 01:31:01.259396] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.811 [2024-05-15 01:31:01.268152] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.811 [2024-05-15 01:31:01.268763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.811 [2024-05-15 01:31:01.269180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.811 [2024-05-15 01:31:01.269235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.811 [2024-05-15 01:31:01.269267] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.811 [2024-05-15 01:31:01.269649] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.811 [2024-05-15 01:31:01.269831] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.811 [2024-05-15 01:31:01.269842] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.811 [2024-05-15 01:31:01.269851] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.811 [2024-05-15 01:31:01.272604] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.811 [2024-05-15 01:31:01.281050] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.811 [2024-05-15 01:31:01.281632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.811 [2024-05-15 01:31:01.282065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.811 [2024-05-15 01:31:01.282105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.811 [2024-05-15 01:31:01.282138] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.811 [2024-05-15 01:31:01.282749] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.811 [2024-05-15 01:31:01.282945] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.811 [2024-05-15 01:31:01.282956] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.811 [2024-05-15 01:31:01.282965] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.811 [2024-05-15 01:31:01.285690] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.811 [2024-05-15 01:31:01.293918] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.811 [2024-05-15 01:31:01.294557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.811 [2024-05-15 01:31:01.294990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.811 [2024-05-15 01:31:01.295031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.811 [2024-05-15 01:31:01.295063] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.811 [2024-05-15 01:31:01.295612] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.811 [2024-05-15 01:31:01.295788] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.811 [2024-05-15 01:31:01.295799] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.811 [2024-05-15 01:31:01.295808] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.811 [2024-05-15 01:31:01.298457] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.811 [2024-05-15 01:31:01.306711] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.811 [2024-05-15 01:31:01.307319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.811 [2024-05-15 01:31:01.307804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.811 [2024-05-15 01:31:01.307816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.811 [2024-05-15 01:31:01.307825] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.811 [2024-05-15 01:31:01.307992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.811 [2024-05-15 01:31:01.308159] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.811 [2024-05-15 01:31:01.308169] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.811 [2024-05-15 01:31:01.308178] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.811 [2024-05-15 01:31:01.310854] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.811 [2024-05-15 01:31:01.319565] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.811 [2024-05-15 01:31:01.320214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.811 [2024-05-15 01:31:01.320698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.811 [2024-05-15 01:31:01.320709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.811 [2024-05-15 01:31:01.320719] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.811 [2024-05-15 01:31:01.320886] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.811 [2024-05-15 01:31:01.321054] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.811 [2024-05-15 01:31:01.321064] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.811 [2024-05-15 01:31:01.321073] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.811 [2024-05-15 01:31:01.323778] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.811 [2024-05-15 01:31:01.332582] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.811 [2024-05-15 01:31:01.333219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.811 [2024-05-15 01:31:01.333664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.811 [2024-05-15 01:31:01.333705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.811 [2024-05-15 01:31:01.333737] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.811 [2024-05-15 01:31:01.334351] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.811 [2024-05-15 01:31:01.334871] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.811 [2024-05-15 01:31:01.334885] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.811 [2024-05-15 01:31:01.334894] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.811 [2024-05-15 01:31:01.337590] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.811 [2024-05-15 01:31:01.345381] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.811 [2024-05-15 01:31:01.346020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.811 [2024-05-15 01:31:01.346508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.811 [2024-05-15 01:31:01.346551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.811 [2024-05-15 01:31:01.346584] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.811 [2024-05-15 01:31:01.347163] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.811 [2024-05-15 01:31:01.347410] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.811 [2024-05-15 01:31:01.347425] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.811 [2024-05-15 01:31:01.347437] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.811 [2024-05-15 01:31:01.351214] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.811 [2024-05-15 01:31:01.358688] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.811 [2024-05-15 01:31:01.359332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.811 [2024-05-15 01:31:01.359769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.811 [2024-05-15 01:31:01.359809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.812 [2024-05-15 01:31:01.359841] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.812 [2024-05-15 01:31:01.360313] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.812 [2024-05-15 01:31:01.360481] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.812 [2024-05-15 01:31:01.360491] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.812 [2024-05-15 01:31:01.360500] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.812 [2024-05-15 01:31:01.363153] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.812 [2024-05-15 01:31:01.371564] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.812 [2024-05-15 01:31:01.372210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.812 [2024-05-15 01:31:01.372702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.812 [2024-05-15 01:31:01.372742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.812 [2024-05-15 01:31:01.372775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.812 [2024-05-15 01:31:01.373009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.812 [2024-05-15 01:31:01.373182] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.812 [2024-05-15 01:31:01.373198] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.812 [2024-05-15 01:31:01.373211] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.812 [2024-05-15 01:31:01.375893] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.812 [2024-05-15 01:31:01.384320] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.812 [2024-05-15 01:31:01.384970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.812 [2024-05-15 01:31:01.385418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.812 [2024-05-15 01:31:01.385460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.812 [2024-05-15 01:31:01.385492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.812 [2024-05-15 01:31:01.385939] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.812 [2024-05-15 01:31:01.386108] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.812 [2024-05-15 01:31:01.386122] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.812 [2024-05-15 01:31:01.386131] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.812 [2024-05-15 01:31:01.388834] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.812 [2024-05-15 01:31:01.397195] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.812 [2024-05-15 01:31:01.397845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.812 [2024-05-15 01:31:01.398329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.812 [2024-05-15 01:31:01.398371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.812 [2024-05-15 01:31:01.398403] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.812 [2024-05-15 01:31:01.398997] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.812 [2024-05-15 01:31:01.399354] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.812 [2024-05-15 01:31:01.399364] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.812 [2024-05-15 01:31:01.399373] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.812 [2024-05-15 01:31:01.402041] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.812 [2024-05-15 01:31:01.410104] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.812 [2024-05-15 01:31:01.410750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.812 [2024-05-15 01:31:01.411132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.812 [2024-05-15 01:31:01.411172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.812 [2024-05-15 01:31:01.411216] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.812 [2024-05-15 01:31:01.411810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.812 [2024-05-15 01:31:01.412301] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.812 [2024-05-15 01:31:01.412311] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.812 [2024-05-15 01:31:01.412320] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.812 [2024-05-15 01:31:01.414992] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.812 [2024-05-15 01:31:01.423060] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.812 [2024-05-15 01:31:01.423692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.812 [2024-05-15 01:31:01.423966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.812 [2024-05-15 01:31:01.424005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.812 [2024-05-15 01:31:01.424037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.812 [2024-05-15 01:31:01.424268] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.812 [2024-05-15 01:31:01.424441] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.812 [2024-05-15 01:31:01.424451] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.812 [2024-05-15 01:31:01.424461] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.812 [2024-05-15 01:31:01.427137] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.812 [2024-05-15 01:31:01.435930] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.812 [2024-05-15 01:31:01.436568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.812 [2024-05-15 01:31:01.437004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.812 [2024-05-15 01:31:01.437043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.812 [2024-05-15 01:31:01.437077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.812 [2024-05-15 01:31:01.437517] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.812 [2024-05-15 01:31:01.437708] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.812 [2024-05-15 01:31:01.437719] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.812 [2024-05-15 01:31:01.437728] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.812 [2024-05-15 01:31:01.440370] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.812 [2024-05-15 01:31:01.448792] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.812 [2024-05-15 01:31:01.449417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.812 [2024-05-15 01:31:01.449684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.812 [2024-05-15 01:31:01.449724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.812 [2024-05-15 01:31:01.449756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.812 [2024-05-15 01:31:01.450093] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.812 [2024-05-15 01:31:01.450275] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.812 [2024-05-15 01:31:01.450285] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.812 [2024-05-15 01:31:01.450294] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.812 [2024-05-15 01:31:01.452956] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.812 [2024-05-15 01:31:01.461572] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.812 [2024-05-15 01:31:01.462218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.812 [2024-05-15 01:31:01.462705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.812 [2024-05-15 01:31:01.462745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.812 [2024-05-15 01:31:01.462777] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.812 [2024-05-15 01:31:01.463210] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.812 [2024-05-15 01:31:01.463399] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.812 [2024-05-15 01:31:01.463409] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.812 [2024-05-15 01:31:01.463418] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.812 [2024-05-15 01:31:01.466061] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.812 [2024-05-15 01:31:01.474314] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.812 [2024-05-15 01:31:01.474957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.812 [2024-05-15 01:31:01.475389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.812 [2024-05-15 01:31:01.475403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.812 [2024-05-15 01:31:01.475412] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.812 [2024-05-15 01:31:01.475585] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.812 [2024-05-15 01:31:01.475758] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.812 [2024-05-15 01:31:01.475769] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.813 [2024-05-15 01:31:01.475778] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.813 [2024-05-15 01:31:01.478446] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.813 [2024-05-15 01:31:01.487001] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.813 [2024-05-15 01:31:01.487611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.813 [2024-05-15 01:31:01.488049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.813 [2024-05-15 01:31:01.488089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:25.813 [2024-05-15 01:31:01.488121] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:25.813 [2024-05-15 01:31:01.488464] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:25.813 [2024-05-15 01:31:01.488646] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.813 [2024-05-15 01:31:01.488656] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.813 [2024-05-15 01:31:01.488665] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.813 [2024-05-15 01:31:01.491225] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.813 [2024-05-15 01:31:01.499923] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.813 [2024-05-15 01:31:01.500566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.073 [2024-05-15 01:31:01.500995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.073 [2024-05-15 01:31:01.501008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.073 [2024-05-15 01:31:01.501018] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.073 [2024-05-15 01:31:01.501189] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.073 [2024-05-15 01:31:01.501368] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.073 [2024-05-15 01:31:01.501378] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.073 [2024-05-15 01:31:01.501387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.073 [2024-05-15 01:31:01.504015] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.073 [2024-05-15 01:31:01.512678] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.073 [2024-05-15 01:31:01.513252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.073 [2024-05-15 01:31:01.513678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.073 [2024-05-15 01:31:01.513690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.073 [2024-05-15 01:31:01.513699] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.073 [2024-05-15 01:31:01.513856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.073 [2024-05-15 01:31:01.514015] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.073 [2024-05-15 01:31:01.514024] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.073 [2024-05-15 01:31:01.514033] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.073 [2024-05-15 01:31:01.516606] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.073 [2024-05-15 01:31:01.525389] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.073 [2024-05-15 01:31:01.525945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.073 [2024-05-15 01:31:01.526318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.073 [2024-05-15 01:31:01.526359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.073 [2024-05-15 01:31:01.526391] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.073 [2024-05-15 01:31:01.526986] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.073 [2024-05-15 01:31:01.527219] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.073 [2024-05-15 01:31:01.527230] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.073 [2024-05-15 01:31:01.527239] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.073 [2024-05-15 01:31:01.529982] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.073 [2024-05-15 01:31:01.538430] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.073 [2024-05-15 01:31:01.539021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.073 [2024-05-15 01:31:01.539391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.073 [2024-05-15 01:31:01.539442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.073 [2024-05-15 01:31:01.539476] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.073 [2024-05-15 01:31:01.540015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.073 [2024-05-15 01:31:01.540261] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.073 [2024-05-15 01:31:01.540276] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.073 [2024-05-15 01:31:01.540288] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.073 [2024-05-15 01:31:01.544069] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.073 [2024-05-15 01:31:01.551691] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.073 [2024-05-15 01:31:01.552333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.073 [2024-05-15 01:31:01.552736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.073 [2024-05-15 01:31:01.552748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.073 [2024-05-15 01:31:01.552758] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.073 [2024-05-15 01:31:01.552925] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.073 [2024-05-15 01:31:01.553091] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.073 [2024-05-15 01:31:01.553101] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.073 [2024-05-15 01:31:01.553110] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.073 [2024-05-15 01:31:01.555673] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.073 [2024-05-15 01:31:01.564501] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.073 [2024-05-15 01:31:01.565137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.073 [2024-05-15 01:31:01.565642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.073 [2024-05-15 01:31:01.565680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.073 [2024-05-15 01:31:01.565689] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.073 [2024-05-15 01:31:01.565856] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.073 [2024-05-15 01:31:01.566023] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.073 [2024-05-15 01:31:01.566033] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.073 [2024-05-15 01:31:01.566042] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.073 [2024-05-15 01:31:01.568601] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.073 [2024-05-15 01:31:01.577305] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.073 [2024-05-15 01:31:01.577924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.073 [2024-05-15 01:31:01.578395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.073 [2024-05-15 01:31:01.578437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.073 [2024-05-15 01:31:01.578477] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.073 [2024-05-15 01:31:01.579072] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.073 [2024-05-15 01:31:01.579582] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.073 [2024-05-15 01:31:01.579592] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.073 [2024-05-15 01:31:01.579600] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.073 [2024-05-15 01:31:01.582081] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.073 [2024-05-15 01:31:01.590061] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.073 [2024-05-15 01:31:01.590718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.073 [2024-05-15 01:31:01.591160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.073 [2024-05-15 01:31:01.591212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.073 [2024-05-15 01:31:01.591249] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.073 [2024-05-15 01:31:01.591416] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.073 [2024-05-15 01:31:01.591583] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.074 [2024-05-15 01:31:01.591593] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.074 [2024-05-15 01:31:01.591602] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.074 [2024-05-15 01:31:01.594159] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.074 [2024-05-15 01:31:01.602773] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.074 [2024-05-15 01:31:01.603383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.074 [2024-05-15 01:31:01.603871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.074 [2024-05-15 01:31:01.603911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.074 [2024-05-15 01:31:01.603944] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.074 [2024-05-15 01:31:01.604538] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.074 [2024-05-15 01:31:01.604706] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.074 [2024-05-15 01:31:01.604717] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.074 [2024-05-15 01:31:01.604726] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.074 [2024-05-15 01:31:01.607281] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.074 [2024-05-15 01:31:01.615461] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.074 [2024-05-15 01:31:01.616058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.074 [2024-05-15 01:31:01.616548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.074 [2024-05-15 01:31:01.616590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.074 [2024-05-15 01:31:01.616623] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.074 [2024-05-15 01:31:01.617251] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.074 [2024-05-15 01:31:01.617713] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.074 [2024-05-15 01:31:01.617723] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.074 [2024-05-15 01:31:01.617731] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.074 [2024-05-15 01:31:01.620213] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.074 [2024-05-15 01:31:01.628194] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.074 [2024-05-15 01:31:01.628837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.074 [2024-05-15 01:31:01.629328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.074 [2024-05-15 01:31:01.629370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.074 [2024-05-15 01:31:01.629402] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.074 [2024-05-15 01:31:01.629755] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.074 [2024-05-15 01:31:01.629914] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.074 [2024-05-15 01:31:01.629923] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.074 [2024-05-15 01:31:01.629932] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.074 [2024-05-15 01:31:01.632440] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.074 [2024-05-15 01:31:01.640959] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.074 [2024-05-15 01:31:01.641518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.074 [2024-05-15 01:31:01.641951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.074 [2024-05-15 01:31:01.641990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.074 [2024-05-15 01:31:01.642022] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.074 [2024-05-15 01:31:01.642633] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.074 [2024-05-15 01:31:01.642996] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.074 [2024-05-15 01:31:01.643006] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.074 [2024-05-15 01:31:01.643015] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.074 [2024-05-15 01:31:01.645574] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.074 [2024-05-15 01:31:01.653692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.074 [2024-05-15 01:31:01.654295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.074 [2024-05-15 01:31:01.654655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.074 [2024-05-15 01:31:01.654696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.074 [2024-05-15 01:31:01.654728] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.074 [2024-05-15 01:31:01.655348] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.074 [2024-05-15 01:31:01.655519] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.074 [2024-05-15 01:31:01.655530] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.074 [2024-05-15 01:31:01.655538] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.074 [2024-05-15 01:31:01.658091] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.074 [2024-05-15 01:31:01.666444] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.074 [2024-05-15 01:31:01.667079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.074 [2024-05-15 01:31:01.667511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.074 [2024-05-15 01:31:01.667553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.074 [2024-05-15 01:31:01.667585] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.074 [2024-05-15 01:31:01.668180] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.074 [2024-05-15 01:31:01.668794] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.074 [2024-05-15 01:31:01.668828] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.074 [2024-05-15 01:31:01.668859] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.074 [2024-05-15 01:31:01.671469] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.074 [2024-05-15 01:31:01.679247] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.074 [2024-05-15 01:31:01.679860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.074 [2024-05-15 01:31:01.680261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.074 [2024-05-15 01:31:01.680274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.074 [2024-05-15 01:31:01.680283] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.074 [2024-05-15 01:31:01.680449] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.074 [2024-05-15 01:31:01.680618] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.074 [2024-05-15 01:31:01.680628] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.074 [2024-05-15 01:31:01.680637] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.074 [2024-05-15 01:31:01.683200] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.074 [2024-05-15 01:31:01.692064] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.074 [2024-05-15 01:31:01.692717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.074 [2024-05-15 01:31:01.693081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.074 [2024-05-15 01:31:01.693121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.074 [2024-05-15 01:31:01.693153] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.074 [2024-05-15 01:31:01.693606] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.074 [2024-05-15 01:31:01.693775] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.074 [2024-05-15 01:31:01.693789] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.074 [2024-05-15 01:31:01.693798] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.074 [2024-05-15 01:31:01.696427] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.074 [2024-05-15 01:31:01.705065] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.074 [2024-05-15 01:31:01.705699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.074 [2024-05-15 01:31:01.706105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.074 [2024-05-15 01:31:01.706146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.074 [2024-05-15 01:31:01.706178] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.074 [2024-05-15 01:31:01.706774] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.074 [2024-05-15 01:31:01.706942] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.074 [2024-05-15 01:31:01.706952] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.074 [2024-05-15 01:31:01.706961] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.074 [2024-05-15 01:31:01.709523] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.074 [2024-05-15 01:31:01.717962] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.074 [2024-05-15 01:31:01.718559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.074 [2024-05-15 01:31:01.719051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.075 [2024-05-15 01:31:01.719091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.075 [2024-05-15 01:31:01.719123] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.075 [2024-05-15 01:31:01.719737] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.075 [2024-05-15 01:31:01.719985] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.075 [2024-05-15 01:31:01.719996] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.075 [2024-05-15 01:31:01.720005] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.075 [2024-05-15 01:31:01.722562] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.075 [2024-05-15 01:31:01.730724] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.075 [2024-05-15 01:31:01.731372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.075 [2024-05-15 01:31:01.731786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.075 [2024-05-15 01:31:01.731826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.075 [2024-05-15 01:31:01.731858] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.075 [2024-05-15 01:31:01.732468] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.075 [2024-05-15 01:31:01.732955] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.075 [2024-05-15 01:31:01.732970] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.075 [2024-05-15 01:31:01.732986] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.075 [2024-05-15 01:31:01.736763] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.075 [2024-05-15 01:31:01.743767] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.075 [2024-05-15 01:31:01.744409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.075 [2024-05-15 01:31:01.744867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.075 [2024-05-15 01:31:01.744907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.075 [2024-05-15 01:31:01.744939] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.075 [2024-05-15 01:31:01.745549] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.075 [2024-05-15 01:31:01.745950] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.075 [2024-05-15 01:31:01.745960] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.075 [2024-05-15 01:31:01.745969] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.075 [2024-05-15 01:31:01.748561] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.075 [2024-05-15 01:31:01.756586] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.075 [2024-05-15 01:31:01.757232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.075 [2024-05-15 01:31:01.757718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.075 [2024-05-15 01:31:01.757758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.075 [2024-05-15 01:31:01.757790] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.075 [2024-05-15 01:31:01.758128] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.075 [2024-05-15 01:31:01.758306] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.075 [2024-05-15 01:31:01.758317] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.075 [2024-05-15 01:31:01.758326] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.075 [2024-05-15 01:31:01.761027] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.336 [2024-05-15 01:31:01.769540] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.336 [2024-05-15 01:31:01.770172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.336 [2024-05-15 01:31:01.770593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.336 [2024-05-15 01:31:01.770606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.336 [2024-05-15 01:31:01.770616] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.336 [2024-05-15 01:31:01.770797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.336 [2024-05-15 01:31:01.770966] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.336 [2024-05-15 01:31:01.770976] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.336 [2024-05-15 01:31:01.770984] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.336 [2024-05-15 01:31:01.773550] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.336 [2024-05-15 01:31:01.782296] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.336 [2024-05-15 01:31:01.782936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.336 [2024-05-15 01:31:01.783416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.336 [2024-05-15 01:31:01.783460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.336 [2024-05-15 01:31:01.783492] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.336 [2024-05-15 01:31:01.784033] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.336 [2024-05-15 01:31:01.784225] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.336 [2024-05-15 01:31:01.784235] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.336 [2024-05-15 01:31:01.784244] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.336 [2024-05-15 01:31:01.786961] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.336 [2024-05-15 01:31:01.795239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.336 [2024-05-15 01:31:01.795909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.336 [2024-05-15 01:31:01.796390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.336 [2024-05-15 01:31:01.796431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.336 [2024-05-15 01:31:01.796463] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.336 [2024-05-15 01:31:01.797043] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.336 [2024-05-15 01:31:01.797216] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.336 [2024-05-15 01:31:01.797227] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.336 [2024-05-15 01:31:01.797235] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.336 [2024-05-15 01:31:01.799840] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.336 [2024-05-15 01:31:01.807991] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.336 [2024-05-15 01:31:01.808579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.336 [2024-05-15 01:31:01.809047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.336 [2024-05-15 01:31:01.809088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.336 [2024-05-15 01:31:01.809121] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.336 [2024-05-15 01:31:01.809332] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.336 [2024-05-15 01:31:01.809501] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.336 [2024-05-15 01:31:01.809512] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.336 [2024-05-15 01:31:01.809521] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.336 [2024-05-15 01:31:01.812142] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.336 [2024-05-15 01:31:01.820744] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.336 [2024-05-15 01:31:01.821362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.336 [2024-05-15 01:31:01.821855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.336 [2024-05-15 01:31:01.821895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.336 [2024-05-15 01:31:01.821928] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.336 [2024-05-15 01:31:01.822158] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.336 [2024-05-15 01:31:01.822330] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.336 [2024-05-15 01:31:01.822341] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.336 [2024-05-15 01:31:01.822350] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.336 [2024-05-15 01:31:01.824944] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.336 [2024-05-15 01:31:01.833551] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.336 [2024-05-15 01:31:01.834184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.336 [2024-05-15 01:31:01.834640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.336 [2024-05-15 01:31:01.834681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.336 [2024-05-15 01:31:01.834713] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.336 [2024-05-15 01:31:01.835181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.336 [2024-05-15 01:31:01.835354] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.336 [2024-05-15 01:31:01.835365] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.336 [2024-05-15 01:31:01.835374] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.336 [2024-05-15 01:31:01.837975] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.336 [2024-05-15 01:31:01.846320] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.336 [2024-05-15 01:31:01.846965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.336 [2024-05-15 01:31:01.847455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.336 [2024-05-15 01:31:01.847497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.336 [2024-05-15 01:31:01.847530] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.336 [2024-05-15 01:31:01.848125] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.337 [2024-05-15 01:31:01.848330] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.337 [2024-05-15 01:31:01.848341] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.337 [2024-05-15 01:31:01.848350] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.337 [2024-05-15 01:31:01.850905] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.337 [2024-05-15 01:31:01.859071] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.337 [2024-05-15 01:31:01.859726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.337 [2024-05-15 01:31:01.859983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.337 [2024-05-15 01:31:01.860023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.337 [2024-05-15 01:31:01.860055] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.337 [2024-05-15 01:31:01.860287] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.337 [2024-05-15 01:31:01.860455] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.337 [2024-05-15 01:31:01.860465] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.337 [2024-05-15 01:31:01.860474] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.337 [2024-05-15 01:31:01.863032] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.337 [2024-05-15 01:31:01.871883] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.337 [2024-05-15 01:31:01.872504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.337 [2024-05-15 01:31:01.872872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.337 [2024-05-15 01:31:01.872913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.337 [2024-05-15 01:31:01.872945] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.337 [2024-05-15 01:31:01.873205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.337 [2024-05-15 01:31:01.873446] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.337 [2024-05-15 01:31:01.873460] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.337 [2024-05-15 01:31:01.873472] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.337 [2024-05-15 01:31:01.877247] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.337 [2024-05-15 01:31:01.884988] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.337 [2024-05-15 01:31:01.885621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.337 [2024-05-15 01:31:01.885980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.337 [2024-05-15 01:31:01.885992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.337 [2024-05-15 01:31:01.886001] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.337 [2024-05-15 01:31:01.886169] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.337 [2024-05-15 01:31:01.886343] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.337 [2024-05-15 01:31:01.886354] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.337 [2024-05-15 01:31:01.886362] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.337 [2024-05-15 01:31:01.888919] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.337 [2024-05-15 01:31:01.897848] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.337 [2024-05-15 01:31:01.898478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.337 [2024-05-15 01:31:01.898945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.337 [2024-05-15 01:31:01.898993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.337 [2024-05-15 01:31:01.899025] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.337 [2024-05-15 01:31:01.899276] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.337 [2024-05-15 01:31:01.899443] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.337 [2024-05-15 01:31:01.899453] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.337 [2024-05-15 01:31:01.899462] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.337 [2024-05-15 01:31:01.902016] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.337 [2024-05-15 01:31:01.910657] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.337 [2024-05-15 01:31:01.911056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.337 [2024-05-15 01:31:01.911547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.337 [2024-05-15 01:31:01.911589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.337 [2024-05-15 01:31:01.911622] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.337 [2024-05-15 01:31:01.912234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.337 [2024-05-15 01:31:01.912621] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.337 [2024-05-15 01:31:01.912631] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.337 [2024-05-15 01:31:01.912639] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.337 [2024-05-15 01:31:01.915124] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.337 [2024-05-15 01:31:01.923415] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.337 [2024-05-15 01:31:01.924057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.337 [2024-05-15 01:31:01.924468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.337 [2024-05-15 01:31:01.924510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.337 [2024-05-15 01:31:01.924543] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.337 [2024-05-15 01:31:01.925092] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.337 [2024-05-15 01:31:01.925272] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.337 [2024-05-15 01:31:01.925283] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.337 [2024-05-15 01:31:01.925291] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.337 [2024-05-15 01:31:01.927771] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.337 [2024-05-15 01:31:01.936201] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.337 [2024-05-15 01:31:01.936814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.337 [2024-05-15 01:31:01.937274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.337 [2024-05-15 01:31:01.937317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.337 [2024-05-15 01:31:01.937357] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.337 [2024-05-15 01:31:01.937578] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.337 [2024-05-15 01:31:01.937745] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.337 [2024-05-15 01:31:01.937755] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.337 [2024-05-15 01:31:01.937764] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.337 [2024-05-15 01:31:01.940300] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.337 [2024-05-15 01:31:01.948980] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.337 [2024-05-15 01:31:01.949604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.337 [2024-05-15 01:31:01.950015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.337 [2024-05-15 01:31:01.950055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.337 [2024-05-15 01:31:01.950087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.337 [2024-05-15 01:31:01.950370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.337 [2024-05-15 01:31:01.950538] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.337 [2024-05-15 01:31:01.950548] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.337 [2024-05-15 01:31:01.950557] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.337 [2024-05-15 01:31:01.953115] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.337 [2024-05-15 01:31:01.961682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.337 [2024-05-15 01:31:01.962323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.337 [2024-05-15 01:31:01.962764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.337 [2024-05-15 01:31:01.962804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.337 [2024-05-15 01:31:01.962836] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.337 [2024-05-15 01:31:01.963343] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.337 [2024-05-15 01:31:01.963511] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.337 [2024-05-15 01:31:01.963521] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.337 [2024-05-15 01:31:01.963530] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.337 [2024-05-15 01:31:01.966091] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.337 [2024-05-15 01:31:01.974431] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.338 [2024-05-15 01:31:01.974811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.338 [2024-05-15 01:31:01.975251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.338 [2024-05-15 01:31:01.975265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.338 [2024-05-15 01:31:01.975274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.338 [2024-05-15 01:31:01.975445] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.338 [2024-05-15 01:31:01.975612] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.338 [2024-05-15 01:31:01.975623] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.338 [2024-05-15 01:31:01.975631] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.338 [2024-05-15 01:31:01.978162] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.338 [2024-05-15 01:31:01.987228] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.338 [2024-05-15 01:31:01.987847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.338 [2024-05-15 01:31:01.988219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.338 [2024-05-15 01:31:01.988260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.338 [2024-05-15 01:31:01.988293] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.338 [2024-05-15 01:31:01.988888] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.338 [2024-05-15 01:31:01.989498] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.338 [2024-05-15 01:31:01.989533] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.338 [2024-05-15 01:31:01.989564] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.338 [2024-05-15 01:31:01.992131] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.338 [2024-05-15 01:31:01.999905] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.338 [2024-05-15 01:31:02.000529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.338 [2024-05-15 01:31:02.000998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.338 [2024-05-15 01:31:02.001038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.338 [2024-05-15 01:31:02.001070] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.338 [2024-05-15 01:31:02.001617] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.338 [2024-05-15 01:31:02.001784] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.338 [2024-05-15 01:31:02.001794] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.338 [2024-05-15 01:31:02.001803] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.338 [2024-05-15 01:31:02.004398] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.338 [2024-05-15 01:31:02.012646] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.338 [2024-05-15 01:31:02.013179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.338 [2024-05-15 01:31:02.013622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.338 [2024-05-15 01:31:02.013663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.338 [2024-05-15 01:31:02.013695] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.338 [2024-05-15 01:31:02.014305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.338 [2024-05-15 01:31:02.014583] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.338 [2024-05-15 01:31:02.014594] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.338 [2024-05-15 01:31:02.014603] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.338 [2024-05-15 01:31:02.018209] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.598 [2024-05-15 01:31:02.026320] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.599 [2024-05-15 01:31:02.026730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-05-15 01:31:02.027098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-05-15 01:31:02.027138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.599 [2024-05-15 01:31:02.027170] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.599 [2024-05-15 01:31:02.027662] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.599 [2024-05-15 01:31:02.027835] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.599 [2024-05-15 01:31:02.027845] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.599 [2024-05-15 01:31:02.027854] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.599 [2024-05-15 01:31:02.030518] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.599 [2024-05-15 01:31:02.039101] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.599 [2024-05-15 01:31:02.039763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-05-15 01:31:02.040550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-05-15 01:31:02.040594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.599 [2024-05-15 01:31:02.040627] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.599 [2024-05-15 01:31:02.040933] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.599 [2024-05-15 01:31:02.041105] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.599 [2024-05-15 01:31:02.041115] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.599 [2024-05-15 01:31:02.041124] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.599 [2024-05-15 01:31:02.043823] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.599 [2024-05-15 01:31:02.052011] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.599 [2024-05-15 01:31:02.052655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-05-15 01:31:02.053063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-05-15 01:31:02.053104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.599 [2024-05-15 01:31:02.053135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.599 [2024-05-15 01:31:02.053705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.599 [2024-05-15 01:31:02.053872] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.599 [2024-05-15 01:31:02.053886] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.599 [2024-05-15 01:31:02.053895] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.599 [2024-05-15 01:31:02.056471] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.599 [2024-05-15 01:31:02.064796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.599 [2024-05-15 01:31:02.065404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-05-15 01:31:02.065896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-05-15 01:31:02.065936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.599 [2024-05-15 01:31:02.065968] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.599 [2024-05-15 01:31:02.066577] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.599 [2024-05-15 01:31:02.066998] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.599 [2024-05-15 01:31:02.067009] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.599 [2024-05-15 01:31:02.067018] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.599 [2024-05-15 01:31:02.069578] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.599 [2024-05-15 01:31:02.077612] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.599 [2024-05-15 01:31:02.078261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-05-15 01:31:02.078675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-05-15 01:31:02.078715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.599 [2024-05-15 01:31:02.078747] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.599 [2024-05-15 01:31:02.079358] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.599 [2024-05-15 01:31:02.079604] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.599 [2024-05-15 01:31:02.079614] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.599 [2024-05-15 01:31:02.079623] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.599 [2024-05-15 01:31:02.082175] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.599 [2024-05-15 01:31:02.090425] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.599 [2024-05-15 01:31:02.091045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-05-15 01:31:02.091345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-05-15 01:31:02.091358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.599 [2024-05-15 01:31:02.091368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.599 [2024-05-15 01:31:02.091536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.599 [2024-05-15 01:31:02.091703] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.599 [2024-05-15 01:31:02.091713] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.599 [2024-05-15 01:31:02.091725] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.599 [2024-05-15 01:31:02.094246] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.599 [2024-05-15 01:31:02.103118] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.599 [2024-05-15 01:31:02.103748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-05-15 01:31:02.104163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-05-15 01:31:02.104217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.599 [2024-05-15 01:31:02.104250] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.599 [2024-05-15 01:31:02.104858] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.599 [2024-05-15 01:31:02.105026] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.599 [2024-05-15 01:31:02.105036] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.599 [2024-05-15 01:31:02.105045] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.599 [2024-05-15 01:31:02.107606] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.599 [2024-05-15 01:31:02.115908] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.599 [2024-05-15 01:31:02.116536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-05-15 01:31:02.117022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-05-15 01:31:02.117062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.599 [2024-05-15 01:31:02.117094] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.599 [2024-05-15 01:31:02.117706] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.599 [2024-05-15 01:31:02.118119] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.599 [2024-05-15 01:31:02.118130] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.599 [2024-05-15 01:31:02.118138] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.599 [2024-05-15 01:31:02.120726] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.599 [2024-05-15 01:31:02.128601] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.599 [2024-05-15 01:31:02.129231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-05-15 01:31:02.129644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-05-15 01:31:02.129684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.599 [2024-05-15 01:31:02.129717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.599 [2024-05-15 01:31:02.130329] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.599 [2024-05-15 01:31:02.130852] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.599 [2024-05-15 01:31:02.130863] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.599 [2024-05-15 01:31:02.130872] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.599 [2024-05-15 01:31:02.133431] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.599 [2024-05-15 01:31:02.141407] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.599 [2024-05-15 01:31:02.142054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-05-15 01:31:02.142545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.599 [2024-05-15 01:31:02.142587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.599 [2024-05-15 01:31:02.142620] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.599 [2024-05-15 01:31:02.143226] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.600 [2024-05-15 01:31:02.143784] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.600 [2024-05-15 01:31:02.143794] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.600 [2024-05-15 01:31:02.143803] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.600 [2024-05-15 01:31:02.146341] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.600 [2024-05-15 01:31:02.154208] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.600 [2024-05-15 01:31:02.154825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-05-15 01:31:02.155261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-05-15 01:31:02.155303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.600 [2024-05-15 01:31:02.155336] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.600 [2024-05-15 01:31:02.155838] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.600 [2024-05-15 01:31:02.156006] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.600 [2024-05-15 01:31:02.156016] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.600 [2024-05-15 01:31:02.156025] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.600 [2024-05-15 01:31:02.158562] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.600 [2024-05-15 01:31:02.166969] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.600 [2024-05-15 01:31:02.167505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-05-15 01:31:02.167850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-05-15 01:31:02.167863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.600 [2024-05-15 01:31:02.167872] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.600 [2024-05-15 01:31:02.168039] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.600 [2024-05-15 01:31:02.168212] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.600 [2024-05-15 01:31:02.168223] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.600 [2024-05-15 01:31:02.168232] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.600 [2024-05-15 01:31:02.170788] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.600 [2024-05-15 01:31:02.179791] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.600 [2024-05-15 01:31:02.180427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-05-15 01:31:02.180901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-05-15 01:31:02.180942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.600 [2024-05-15 01:31:02.180974] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.600 [2024-05-15 01:31:02.181216] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.600 [2024-05-15 01:31:02.181384] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.600 [2024-05-15 01:31:02.181394] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.600 [2024-05-15 01:31:02.181403] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.600 [2024-05-15 01:31:02.184039] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.600 [2024-05-15 01:31:02.192805] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.600 [2024-05-15 01:31:02.193355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-05-15 01:31:02.193709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-05-15 01:31:02.193721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.600 [2024-05-15 01:31:02.193732] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.600 [2024-05-15 01:31:02.193904] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.600 [2024-05-15 01:31:02.194075] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.600 [2024-05-15 01:31:02.194086] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.600 [2024-05-15 01:31:02.194095] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.600 [2024-05-15 01:31:02.196792] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.600 [2024-05-15 01:31:02.205702] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.600 [2024-05-15 01:31:02.206323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-05-15 01:31:02.206727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-05-15 01:31:02.206739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.600 [2024-05-15 01:31:02.206749] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.600 [2024-05-15 01:31:02.206921] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.600 [2024-05-15 01:31:02.207092] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.600 [2024-05-15 01:31:02.207102] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.600 [2024-05-15 01:31:02.207112] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.600 [2024-05-15 01:31:02.209813] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.600 [2024-05-15 01:31:02.218741] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.600 [2024-05-15 01:31:02.219336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-05-15 01:31:02.219764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-05-15 01:31:02.219776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.600 [2024-05-15 01:31:02.219786] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.600 [2024-05-15 01:31:02.219957] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.600 [2024-05-15 01:31:02.220130] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.600 [2024-05-15 01:31:02.220141] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.600 [2024-05-15 01:31:02.220150] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.600 [2024-05-15 01:31:02.222848] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.600 [2024-05-15 01:31:02.231752] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.600 [2024-05-15 01:31:02.232357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-05-15 01:31:02.232616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-05-15 01:31:02.232628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.600 [2024-05-15 01:31:02.232637] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.600 [2024-05-15 01:31:02.232808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.600 [2024-05-15 01:31:02.232981] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.600 [2024-05-15 01:31:02.232992] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.600 [2024-05-15 01:31:02.233001] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.600 [2024-05-15 01:31:02.235699] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.600 [2024-05-15 01:31:02.244766] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.600 [2024-05-15 01:31:02.245174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-05-15 01:31:02.245606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-05-15 01:31:02.245618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.600 [2024-05-15 01:31:02.245628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.600 [2024-05-15 01:31:02.245801] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.600 [2024-05-15 01:31:02.245973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.600 [2024-05-15 01:31:02.245984] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.600 [2024-05-15 01:31:02.245993] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.600 [2024-05-15 01:31:02.248688] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.600 [2024-05-15 01:31:02.257766] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.600 [2024-05-15 01:31:02.258395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-05-15 01:31:02.258802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-05-15 01:31:02.258817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.600 [2024-05-15 01:31:02.258827] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.600 [2024-05-15 01:31:02.258999] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.600 [2024-05-15 01:31:02.259172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.600 [2024-05-15 01:31:02.259182] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.600 [2024-05-15 01:31:02.259195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.600 [2024-05-15 01:31:02.261885] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.600 [2024-05-15 01:31:02.270688] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.600 [2024-05-15 01:31:02.271294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.600 [2024-05-15 01:31:02.271645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-05-15 01:31:02.271658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.601 [2024-05-15 01:31:02.271667] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.601 [2024-05-15 01:31:02.271839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.601 [2024-05-15 01:31:02.272011] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.601 [2024-05-15 01:31:02.272022] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.601 [2024-05-15 01:31:02.272031] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.601 [2024-05-15 01:31:02.274731] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.601 [2024-05-15 01:31:02.283634] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.601 [2024-05-15 01:31:02.284263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-05-15 01:31:02.284618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.601 [2024-05-15 01:31:02.284631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.601 [2024-05-15 01:31:02.284640] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.601 [2024-05-15 01:31:02.284812] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.601 [2024-05-15 01:31:02.284984] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.601 [2024-05-15 01:31:02.284995] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.601 [2024-05-15 01:31:02.285004] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.601 [2024-05-15 01:31:02.287710] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.861 [2024-05-15 01:31:02.296633] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.861 [2024-05-15 01:31:02.297261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.861 [2024-05-15 01:31:02.297686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.861 [2024-05-15 01:31:02.297726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.861 [2024-05-15 01:31:02.297764] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.861 [2024-05-15 01:31:02.298003] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.861 [2024-05-15 01:31:02.298249] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.861 [2024-05-15 01:31:02.298264] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.861 [2024-05-15 01:31:02.298276] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.861 [2024-05-15 01:31:02.302060] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.861 [2024-05-15 01:31:02.310026] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.861 [2024-05-15 01:31:02.310620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.862 [2024-05-15 01:31:02.311084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.862 [2024-05-15 01:31:02.311124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.862 [2024-05-15 01:31:02.311156] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.862 [2024-05-15 01:31:02.311766] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.862 [2024-05-15 01:31:02.312380] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.862 [2024-05-15 01:31:02.312428] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.862 [2024-05-15 01:31:02.312437] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.862 [2024-05-15 01:31:02.315133] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.862 [2024-05-15 01:31:02.322898] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.862 [2024-05-15 01:31:02.323507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.862 [2024-05-15 01:31:02.323870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.862 [2024-05-15 01:31:02.323882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.862 [2024-05-15 01:31:02.323892] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.862 [2024-05-15 01:31:02.324063] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.862 [2024-05-15 01:31:02.324238] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.862 [2024-05-15 01:31:02.324251] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.862 [2024-05-15 01:31:02.324260] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.862 [2024-05-15 01:31:02.326893] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.862 [2024-05-15 01:31:02.335773] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.862 [2024-05-15 01:31:02.336415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.862 [2024-05-15 01:31:02.336779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.862 [2024-05-15 01:31:02.336791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.862 [2024-05-15 01:31:02.336801] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.862 [2024-05-15 01:31:02.336970] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.862 [2024-05-15 01:31:02.337137] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.862 [2024-05-15 01:31:02.337148] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.862 [2024-05-15 01:31:02.337157] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.862 [2024-05-15 01:31:02.339769] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.862 [2024-05-15 01:31:02.348512] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.862 [2024-05-15 01:31:02.349100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.862 [2024-05-15 01:31:02.349495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.862 [2024-05-15 01:31:02.349538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.862 [2024-05-15 01:31:02.349570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.862 [2024-05-15 01:31:02.350148] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.862 [2024-05-15 01:31:02.350333] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.862 [2024-05-15 01:31:02.350343] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.862 [2024-05-15 01:31:02.350352] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.862 [2024-05-15 01:31:02.352964] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.862 [2024-05-15 01:31:02.361254] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.862 [2024-05-15 01:31:02.361856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.862 [2024-05-15 01:31:02.362273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.862 [2024-05-15 01:31:02.362315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.862 [2024-05-15 01:31:02.362347] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.862 [2024-05-15 01:31:02.362943] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.862 [2024-05-15 01:31:02.363237] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.862 [2024-05-15 01:31:02.363247] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.862 [2024-05-15 01:31:02.363256] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.862 [2024-05-15 01:31:02.365877] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.862 [2024-05-15 01:31:02.374265] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.862 [2024-05-15 01:31:02.374872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.862 [2024-05-15 01:31:02.375317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.862 [2024-05-15 01:31:02.375360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.862 [2024-05-15 01:31:02.375392] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.862 [2024-05-15 01:31:02.375988] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.862 [2024-05-15 01:31:02.376428] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.862 [2024-05-15 01:31:02.376439] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.862 [2024-05-15 01:31:02.376448] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.862 [2024-05-15 01:31:02.379074] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.862 [2024-05-15 01:31:02.387113] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.862 [2024-05-15 01:31:02.387771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.862 [2024-05-15 01:31:02.388187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.862 [2024-05-15 01:31:02.388242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.862 [2024-05-15 01:31:02.388275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.862 [2024-05-15 01:31:02.388477] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.862 [2024-05-15 01:31:02.388643] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.862 [2024-05-15 01:31:02.388654] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.862 [2024-05-15 01:31:02.388662] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.862 [2024-05-15 01:31:02.391241] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.862 [2024-05-15 01:31:02.399860] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.862 [2024-05-15 01:31:02.400488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.862 [2024-05-15 01:31:02.400910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.862 [2024-05-15 01:31:02.400950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.862 [2024-05-15 01:31:02.400983] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.862 [2024-05-15 01:31:02.401767] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.862 [2024-05-15 01:31:02.401935] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.862 [2024-05-15 01:31:02.401945] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.862 [2024-05-15 01:31:02.401954] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.862 [2024-05-15 01:31:02.404613] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.862 [2024-05-15 01:31:02.412672] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.862 [2024-05-15 01:31:02.413334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.862 [2024-05-15 01:31:02.413807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.862 [2024-05-15 01:31:02.413847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.862 [2024-05-15 01:31:02.413880] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.862 [2024-05-15 01:31:02.414370] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.862 [2024-05-15 01:31:02.414539] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.862 [2024-05-15 01:31:02.414552] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.862 [2024-05-15 01:31:02.414560] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.862 [2024-05-15 01:31:02.417188] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.862 [2024-05-15 01:31:02.425494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.862 [2024-05-15 01:31:02.426111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.862 [2024-05-15 01:31:02.426601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.862 [2024-05-15 01:31:02.426644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.862 [2024-05-15 01:31:02.426676] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.862 [2024-05-15 01:31:02.427284] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.862 [2024-05-15 01:31:02.427465] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.862 [2024-05-15 01:31:02.427475] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.863 [2024-05-15 01:31:02.427484] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.863 [2024-05-15 01:31:02.430109] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.863 [2024-05-15 01:31:02.438350] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.863 [2024-05-15 01:31:02.438907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.863 [2024-05-15 01:31:02.439305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.863 [2024-05-15 01:31:02.439349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.863 [2024-05-15 01:31:02.439381] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.863 [2024-05-15 01:31:02.439818] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.863 [2024-05-15 01:31:02.440059] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.863 [2024-05-15 01:31:02.440073] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.863 [2024-05-15 01:31:02.440086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.863 [2024-05-15 01:31:02.443870] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.863 [2024-05-15 01:31:02.451966] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.863 [2024-05-15 01:31:02.452551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.863 [2024-05-15 01:31:02.452916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.863 [2024-05-15 01:31:02.452956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.863 [2024-05-15 01:31:02.452988] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.863 [2024-05-15 01:31:02.453485] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.863 [2024-05-15 01:31:02.453653] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.863 [2024-05-15 01:31:02.453663] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.863 [2024-05-15 01:31:02.453675] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.863 [2024-05-15 01:31:02.456256] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.863 [2024-05-15 01:31:02.464674] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.863 [2024-05-15 01:31:02.465314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.863 [2024-05-15 01:31:02.465785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.863 [2024-05-15 01:31:02.465826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.863 [2024-05-15 01:31:02.465858] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.863 [2024-05-15 01:31:02.466468] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.863 [2024-05-15 01:31:02.466939] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.863 [2024-05-15 01:31:02.466949] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.863 [2024-05-15 01:31:02.466958] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.863 [2024-05-15 01:31:02.469558] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.863 [2024-05-15 01:31:02.477380] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.863 [2024-05-15 01:31:02.478014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.863 [2024-05-15 01:31:02.478485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.863 [2024-05-15 01:31:02.478527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.863 [2024-05-15 01:31:02.478559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.863 [2024-05-15 01:31:02.479155] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.863 [2024-05-15 01:31:02.479728] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.863 [2024-05-15 01:31:02.479739] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.863 [2024-05-15 01:31:02.479748] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.863 [2024-05-15 01:31:02.482386] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.863 [2024-05-15 01:31:02.490096] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.863 [2024-05-15 01:31:02.490603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.863 [2024-05-15 01:31:02.490960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.863 [2024-05-15 01:31:02.491000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.863 [2024-05-15 01:31:02.491033] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.863 [2024-05-15 01:31:02.491640] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.863 [2024-05-15 01:31:02.492029] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.863 [2024-05-15 01:31:02.492040] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.863 [2024-05-15 01:31:02.492048] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.863 [2024-05-15 01:31:02.494624] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.863 [2024-05-15 01:31:02.502876] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.863 [2024-05-15 01:31:02.503482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.863 [2024-05-15 01:31:02.503901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.863 [2024-05-15 01:31:02.503941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.863 [2024-05-15 01:31:02.503973] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.863 [2024-05-15 01:31:02.504550] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.863 [2024-05-15 01:31:02.504718] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.863 [2024-05-15 01:31:02.504728] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.863 [2024-05-15 01:31:02.504737] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.863 [2024-05-15 01:31:02.507393] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.863 [2024-05-15 01:31:02.515730] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.863 [2024-05-15 01:31:02.516358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.863 [2024-05-15 01:31:02.516724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.863 [2024-05-15 01:31:02.516766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.863 [2024-05-15 01:31:02.516775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.863 [2024-05-15 01:31:02.516933] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.863 [2024-05-15 01:31:02.517092] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.863 [2024-05-15 01:31:02.517102] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.863 [2024-05-15 01:31:02.517110] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.863 [2024-05-15 01:31:02.519715] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.863 [2024-05-15 01:31:02.528540] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.863 [2024-05-15 01:31:02.529227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.863 [2024-05-15 01:31:02.529601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.863 [2024-05-15 01:31:02.529642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.863 [2024-05-15 01:31:02.529674] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.863 [2024-05-15 01:31:02.530281] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.863 [2024-05-15 01:31:02.530879] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.863 [2024-05-15 01:31:02.530894] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.863 [2024-05-15 01:31:02.530906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.863 [2024-05-15 01:31:02.534691] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.863 [2024-05-15 01:31:02.542521] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.863 [2024-05-15 01:31:02.543106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.863 [2024-05-15 01:31:02.543511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.863 [2024-05-15 01:31:02.543553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:26.863 [2024-05-15 01:31:02.543585] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:26.863 [2024-05-15 01:31:02.543817] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:26.863 [2024-05-15 01:31:02.543985] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.863 [2024-05-15 01:31:02.543995] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.863 [2024-05-15 01:31:02.544004] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.863 [2024-05-15 01:31:02.546599] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.124 [2024-05-15 01:31:02.555367] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.124 [2024-05-15 01:31:02.555914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.124 [2024-05-15 01:31:02.556404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.124 [2024-05-15 01:31:02.556445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.124 [2024-05-15 01:31:02.556477] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.124 [2024-05-15 01:31:02.557013] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.124 [2024-05-15 01:31:02.557187] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.124 [2024-05-15 01:31:02.557203] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.124 [2024-05-15 01:31:02.557212] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.124 [2024-05-15 01:31:02.559932] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.124 [2024-05-15 01:31:02.568239] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.124 [2024-05-15 01:31:02.568834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.124 [2024-05-15 01:31:02.569325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.124 [2024-05-15 01:31:02.569367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.124 [2024-05-15 01:31:02.569399] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.124 [2024-05-15 01:31:02.569799] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.124 [2024-05-15 01:31:02.569968] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.124 [2024-05-15 01:31:02.569978] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.124 [2024-05-15 01:31:02.569987] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.124 [2024-05-15 01:31:02.572552] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.124 [2024-05-15 01:31:02.581078] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.124 [2024-05-15 01:31:02.581744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.124 [2024-05-15 01:31:02.582159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.124 [2024-05-15 01:31:02.582213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.124 [2024-05-15 01:31:02.582246] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.124 [2024-05-15 01:31:02.582694] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.124 [2024-05-15 01:31:02.582863] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.124 [2024-05-15 01:31:02.582873] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.124 [2024-05-15 01:31:02.582882] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.124 [2024-05-15 01:31:02.585448] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.124 [2024-05-15 01:31:02.593876] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.124 [2024-05-15 01:31:02.594516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.124 [2024-05-15 01:31:02.594958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.124 [2024-05-15 01:31:02.594997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.124 [2024-05-15 01:31:02.595030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.124 [2024-05-15 01:31:02.595218] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.124 [2024-05-15 01:31:02.595386] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.124 [2024-05-15 01:31:02.595396] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.124 [2024-05-15 01:31:02.595405] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.124 [2024-05-15 01:31:02.598046] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.124 [2024-05-15 01:31:02.606601] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.124 [2024-05-15 01:31:02.607166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.124 [2024-05-15 01:31:02.607670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.124 [2024-05-15 01:31:02.607711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.124 [2024-05-15 01:31:02.607744] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.124 [2024-05-15 01:31:02.608269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.124 [2024-05-15 01:31:02.608437] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.124 [2024-05-15 01:31:02.608447] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.124 [2024-05-15 01:31:02.608456] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.124 [2024-05-15 01:31:02.611012] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.124 [2024-05-15 01:31:02.619524] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.124 [2024-05-15 01:31:02.620133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.124 [2024-05-15 01:31:02.620506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.124 [2024-05-15 01:31:02.620547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.124 [2024-05-15 01:31:02.620579] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.124 [2024-05-15 01:31:02.620923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.124 [2024-05-15 01:31:02.621092] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.124 [2024-05-15 01:31:02.621102] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.124 [2024-05-15 01:31:02.621111] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.124 [2024-05-15 01:31:02.623729] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.124 [2024-05-15 01:31:02.632377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.124 [2024-05-15 01:31:02.633004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.124 [2024-05-15 01:31:02.633427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.124 [2024-05-15 01:31:02.633470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.124 [2024-05-15 01:31:02.633502] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.124 [2024-05-15 01:31:02.634099] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.124 [2024-05-15 01:31:02.634281] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.124 [2024-05-15 01:31:02.634291] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.124 [2024-05-15 01:31:02.634300] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.124 [2024-05-15 01:31:02.636859] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.124 [2024-05-15 01:31:02.645285] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.124 [2024-05-15 01:31:02.645869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.124 [2024-05-15 01:31:02.646292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.124 [2024-05-15 01:31:02.646336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.124 [2024-05-15 01:31:02.646368] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.124 [2024-05-15 01:31:02.646862] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.124 [2024-05-15 01:31:02.647022] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.124 [2024-05-15 01:31:02.647032] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.124 [2024-05-15 01:31:02.647040] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.124 [2024-05-15 01:31:02.649670] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.124 [2024-05-15 01:31:02.658091] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.124 [2024-05-15 01:31:02.658726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.124 [2024-05-15 01:31:02.659228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.124 [2024-05-15 01:31:02.659270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.124 [2024-05-15 01:31:02.659311] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.124 [2024-05-15 01:31:02.659907] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.124 [2024-05-15 01:31:02.660128] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.124 [2024-05-15 01:31:02.660139] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.124 [2024-05-15 01:31:02.660147] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.124 [2024-05-15 01:31:02.662866] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.125 [2024-05-15 01:31:02.671024] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.125 [2024-05-15 01:31:02.671575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.125 [2024-05-15 01:31:02.672004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.125 [2024-05-15 01:31:02.672044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.125 [2024-05-15 01:31:02.672077] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.125 [2024-05-15 01:31:02.672282] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.125 [2024-05-15 01:31:02.672450] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.125 [2024-05-15 01:31:02.672460] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.125 [2024-05-15 01:31:02.672469] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.125 [2024-05-15 01:31:02.675090] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.125 [2024-05-15 01:31:02.683806] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.125 [2024-05-15 01:31:02.684447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.125 [2024-05-15 01:31:02.684870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.125 [2024-05-15 01:31:02.684910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.125 [2024-05-15 01:31:02.684942] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.125 [2024-05-15 01:31:02.685390] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.125 [2024-05-15 01:31:02.685564] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.125 [2024-05-15 01:31:02.685574] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.125 [2024-05-15 01:31:02.685583] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.125 [2024-05-15 01:31:02.688144] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.125 [2024-05-15 01:31:02.696626] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.125 [2024-05-15 01:31:02.697257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.125 [2024-05-15 01:31:02.697679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.125 [2024-05-15 01:31:02.697720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.125 [2024-05-15 01:31:02.697752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.125 [2024-05-15 01:31:02.698166] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.125 [2024-05-15 01:31:02.698337] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.125 [2024-05-15 01:31:02.698348] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.125 [2024-05-15 01:31:02.698357] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.125 [2024-05-15 01:31:02.700913] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.125 [2024-05-15 01:31:02.709474] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.125 [2024-05-15 01:31:02.710114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.125 [2024-05-15 01:31:02.710597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.125 [2024-05-15 01:31:02.710638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.125 [2024-05-15 01:31:02.710671] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.125 [2024-05-15 01:31:02.711227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.125 [2024-05-15 01:31:02.711396] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.125 [2024-05-15 01:31:02.711406] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.125 [2024-05-15 01:31:02.711415] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.125 [2024-05-15 01:31:02.714003] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.125 [2024-05-15 01:31:02.722341] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.125 [2024-05-15 01:31:02.722904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.125 [2024-05-15 01:31:02.723310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.125 [2024-05-15 01:31:02.723352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.125 [2024-05-15 01:31:02.723383] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.125 [2024-05-15 01:31:02.723667] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.125 [2024-05-15 01:31:02.723907] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.125 [2024-05-15 01:31:02.723921] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.125 [2024-05-15 01:31:02.723933] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.125 [2024-05-15 01:31:02.727714] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.125 [2024-05-15 01:31:02.735605] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.125 [2024-05-15 01:31:02.736186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.125 [2024-05-15 01:31:02.736620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.125 [2024-05-15 01:31:02.736659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.125 [2024-05-15 01:31:02.736693] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.125 [2024-05-15 01:31:02.737265] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.125 [2024-05-15 01:31:02.737449] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.125 [2024-05-15 01:31:02.737460] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.125 [2024-05-15 01:31:02.737468] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.125 [2024-05-15 01:31:02.740023] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.125 [2024-05-15 01:31:02.748443] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.125 [2024-05-15 01:31:02.749082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.125 [2024-05-15 01:31:02.749432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.125 [2024-05-15 01:31:02.749444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.125 [2024-05-15 01:31:02.749453] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.125 [2024-05-15 01:31:02.749611] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.125 [2024-05-15 01:31:02.749769] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.125 [2024-05-15 01:31:02.749778] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.125 [2024-05-15 01:31:02.749787] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.125 [2024-05-15 01:31:02.752288] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.125 [2024-05-15 01:31:02.761130] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.125 [2024-05-15 01:31:02.761696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.125 [2024-05-15 01:31:02.762150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.125 [2024-05-15 01:31:02.762201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.125 [2024-05-15 01:31:02.762235] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.125 [2024-05-15 01:31:02.762687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.125 [2024-05-15 01:31:02.762854] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.125 [2024-05-15 01:31:02.762864] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.125 [2024-05-15 01:31:02.762873] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.125 [2024-05-15 01:31:02.765459] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.125 [2024-05-15 01:31:02.773884] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.125 [2024-05-15 01:31:02.774523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.125 [2024-05-15 01:31:02.774947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.125 [2024-05-15 01:31:02.774987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.125 [2024-05-15 01:31:02.775019] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.125 [2024-05-15 01:31:02.775302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.125 [2024-05-15 01:31:02.775469] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.125 [2024-05-15 01:31:02.775483] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.125 [2024-05-15 01:31:02.775492] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.125 [2024-05-15 01:31:02.778112] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.125 [2024-05-15 01:31:02.786739] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.125 [2024-05-15 01:31:02.787354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.125 [2024-05-15 01:31:02.787730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.125 [2024-05-15 01:31:02.787743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.125 [2024-05-15 01:31:02.787752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.126 [2024-05-15 01:31:02.787924] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.126 [2024-05-15 01:31:02.788096] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.126 [2024-05-15 01:31:02.788107] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.126 [2024-05-15 01:31:02.788116] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.126 [2024-05-15 01:31:02.790695] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.126 [2024-05-15 01:31:02.799646] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.126 [2024-05-15 01:31:02.800256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.126 [2024-05-15 01:31:02.800688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.126 [2024-05-15 01:31:02.800727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.126 [2024-05-15 01:31:02.800759] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.126 [2024-05-15 01:31:02.801248] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.126 [2024-05-15 01:31:02.801415] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.126 [2024-05-15 01:31:02.801426] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.126 [2024-05-15 01:31:02.801435] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.126 [2024-05-15 01:31:02.804044] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.126 [2024-05-15 01:31:02.812570] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.126 [2024-05-15 01:31:02.813171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.126 [2024-05-15 01:31:02.813519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.126 [2024-05-15 01:31:02.813533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.126 [2024-05-15 01:31:02.813542] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.126 [2024-05-15 01:31:02.813714] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.126 [2024-05-15 01:31:02.813885] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.126 [2024-05-15 01:31:02.813896] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.386 [2024-05-15 01:31:02.813910] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.386 [2024-05-15 01:31:02.816613] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.386 [2024-05-15 01:31:02.825520] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.386 [2024-05-15 01:31:02.826168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.386 [2024-05-15 01:31:02.826620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.386 [2024-05-15 01:31:02.826662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.386 [2024-05-15 01:31:02.826694] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.386 [2024-05-15 01:31:02.827304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.386 [2024-05-15 01:31:02.827502] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.386 [2024-05-15 01:31:02.827513] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.386 [2024-05-15 01:31:02.827522] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.386 [2024-05-15 01:31:02.830079] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.386 [2024-05-15 01:31:02.838323] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.386 [2024-05-15 01:31:02.838990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.386 [2024-05-15 01:31:02.839517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.386 [2024-05-15 01:31:02.839559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.386 [2024-05-15 01:31:02.839591] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.386 [2024-05-15 01:31:02.840186] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.386 [2024-05-15 01:31:02.840687] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.386 [2024-05-15 01:31:02.840698] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.386 [2024-05-15 01:31:02.840707] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.386 [2024-05-15 01:31:02.843346] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.386 [2024-05-15 01:31:02.851103] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.386 [2024-05-15 01:31:02.851744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.386 [2024-05-15 01:31:02.852234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.386 [2024-05-15 01:31:02.852276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.386 [2024-05-15 01:31:02.852302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.386 [2024-05-15 01:31:02.852473] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.386 [2024-05-15 01:31:02.852632] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.386 [2024-05-15 01:31:02.852642] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.386 [2024-05-15 01:31:02.852650] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.386 [2024-05-15 01:31:02.855271] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.387 [2024-05-15 01:31:02.863872] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.387 [2024-05-15 01:31:02.864513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.387 [2024-05-15 01:31:02.864943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.387 [2024-05-15 01:31:02.864982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.387 [2024-05-15 01:31:02.865015] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.387 [2024-05-15 01:31:02.865627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.387 [2024-05-15 01:31:02.865803] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.387 [2024-05-15 01:31:02.865813] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.387 [2024-05-15 01:31:02.865821] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.387 [2024-05-15 01:31:02.869417] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.387 [2024-05-15 01:31:02.877513] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.387 [2024-05-15 01:31:02.878074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.387 [2024-05-15 01:31:02.878552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.387 [2024-05-15 01:31:02.878595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.387 [2024-05-15 01:31:02.878627] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.387 [2024-05-15 01:31:02.878868] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.387 [2024-05-15 01:31:02.879036] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.387 [2024-05-15 01:31:02.879046] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.387 [2024-05-15 01:31:02.879054] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.387 [2024-05-15 01:31:02.881626] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.387 [2024-05-15 01:31:02.890329] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.387 [2024-05-15 01:31:02.890957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.387 [2024-05-15 01:31:02.891454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.387 [2024-05-15 01:31:02.891496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.387 [2024-05-15 01:31:02.891528] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.387 [2024-05-15 01:31:02.892122] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.387 [2024-05-15 01:31:02.892714] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.387 [2024-05-15 01:31:02.892725] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.387 [2024-05-15 01:31:02.892734] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.387 [2024-05-15 01:31:02.895319] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.387 [2024-05-15 01:31:02.903041] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.387 [2024-05-15 01:31:02.903648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.387 [2024-05-15 01:31:02.904084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.387 [2024-05-15 01:31:02.904124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.387 [2024-05-15 01:31:02.904156] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.387 [2024-05-15 01:31:02.904634] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.387 [2024-05-15 01:31:02.904802] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.387 [2024-05-15 01:31:02.904812] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.387 [2024-05-15 01:31:02.904821] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.387 [2024-05-15 01:31:02.907377] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.387 [2024-05-15 01:31:02.915711] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.387 [2024-05-15 01:31:02.916342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.387 [2024-05-15 01:31:02.916858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.387 [2024-05-15 01:31:02.916897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.387 [2024-05-15 01:31:02.916929] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.387 [2024-05-15 01:31:02.917539] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.387 [2024-05-15 01:31:02.917985] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.387 [2024-05-15 01:31:02.917996] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.387 [2024-05-15 01:31:02.918005] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.387 [2024-05-15 01:31:02.920585] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.387 [2024-05-15 01:31:02.928466] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.387 [2024-05-15 01:31:02.929053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.387 [2024-05-15 01:31:02.929495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.387 [2024-05-15 01:31:02.929538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.387 [2024-05-15 01:31:02.929570] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.387 [2024-05-15 01:31:02.930056] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.387 [2024-05-15 01:31:02.930228] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.387 [2024-05-15 01:31:02.930239] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.387 [2024-05-15 01:31:02.930248] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.387 [2024-05-15 01:31:02.932799] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.387 [2024-05-15 01:31:02.941196] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.387 [2024-05-15 01:31:02.941806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.387 [2024-05-15 01:31:02.942226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.387 [2024-05-15 01:31:02.942267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.387 [2024-05-15 01:31:02.942300] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.387 [2024-05-15 01:31:02.942823] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.387 [2024-05-15 01:31:02.942990] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.387 [2024-05-15 01:31:02.943000] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.387 [2024-05-15 01:31:02.943009] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.387 [2024-05-15 01:31:02.945572] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.387 [2024-05-15 01:31:02.953964] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.387 [2024-05-15 01:31:02.954563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.387 [2024-05-15 01:31:02.955078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.387 [2024-05-15 01:31:02.955118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.387 [2024-05-15 01:31:02.955150] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.387 [2024-05-15 01:31:02.955759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.387 [2024-05-15 01:31:02.956115] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.387 [2024-05-15 01:31:02.956125] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.387 [2024-05-15 01:31:02.956134] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.387 [2024-05-15 01:31:02.958689] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.387 [2024-05-15 01:31:02.966697] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.387 [2024-05-15 01:31:02.967319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.387 [2024-05-15 01:31:02.967832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.387 [2024-05-15 01:31:02.967872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.387 [2024-05-15 01:31:02.967904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.387 [2024-05-15 01:31:02.968515] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.387 [2024-05-15 01:31:02.968946] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.387 [2024-05-15 01:31:02.968956] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.387 [2024-05-15 01:31:02.968965] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.387 [2024-05-15 01:31:02.971518] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.387 [2024-05-15 01:31:02.979487] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.387 [2024-05-15 01:31:02.980110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.387 [2024-05-15 01:31:02.980617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.387 [2024-05-15 01:31:02.980659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.387 [2024-05-15 01:31:02.980692] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.388 [2024-05-15 01:31:02.981300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.388 [2024-05-15 01:31:02.981902] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.388 [2024-05-15 01:31:02.981930] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.388 [2024-05-15 01:31:02.981939] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.388 [2024-05-15 01:31:02.984504] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.388 [2024-05-15 01:31:02.992222] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.388 [2024-05-15 01:31:02.992849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.388 [2024-05-15 01:31:02.993335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.388 [2024-05-15 01:31:02.993377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.388 [2024-05-15 01:31:02.993409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.388 [2024-05-15 01:31:02.994005] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.388 [2024-05-15 01:31:02.994173] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.388 [2024-05-15 01:31:02.994183] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.388 [2024-05-15 01:31:02.994195] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.388 [2024-05-15 01:31:02.996756] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.388 [2024-05-15 01:31:03.005060] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.388 [2024-05-15 01:31:03.005677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.388 [2024-05-15 01:31:03.006118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.388 [2024-05-15 01:31:03.006158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.388 [2024-05-15 01:31:03.006202] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.388 [2024-05-15 01:31:03.006732] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.388 [2024-05-15 01:31:03.006973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.388 [2024-05-15 01:31:03.006987] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.388 [2024-05-15 01:31:03.006999] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.388 [2024-05-15 01:31:03.010771] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.388 [2024-05-15 01:31:03.018737] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.388 [2024-05-15 01:31:03.019359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.388 [2024-05-15 01:31:03.019869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.388 [2024-05-15 01:31:03.019909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.388 [2024-05-15 01:31:03.019949] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.388 [2024-05-15 01:31:03.020561] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.388 [2024-05-15 01:31:03.021063] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.388 [2024-05-15 01:31:03.021073] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.388 [2024-05-15 01:31:03.021082] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.388 [2024-05-15 01:31:03.023664] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.388 [2024-05-15 01:31:03.031430] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.388 [2024-05-15 01:31:03.032034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.388 [2024-05-15 01:31:03.032453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.388 [2024-05-15 01:31:03.032494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.388 [2024-05-15 01:31:03.032527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.388 [2024-05-15 01:31:03.032813] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.388 [2024-05-15 01:31:03.032982] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.388 [2024-05-15 01:31:03.032992] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.388 [2024-05-15 01:31:03.033001] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.388 [2024-05-15 01:31:03.035555] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.388 [2024-05-15 01:31:03.044217] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.388 [2024-05-15 01:31:03.044798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.388 [2024-05-15 01:31:03.045281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.388 [2024-05-15 01:31:03.045294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.388 [2024-05-15 01:31:03.045304] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.388 [2024-05-15 01:31:03.045470] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.388 [2024-05-15 01:31:03.045638] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.388 [2024-05-15 01:31:03.045649] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.388 [2024-05-15 01:31:03.045657] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.388 [2024-05-15 01:31:03.048218] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.388 [2024-05-15 01:31:03.056929] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.388 [2024-05-15 01:31:03.057538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.388 [2024-05-15 01:31:03.057971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.388 [2024-05-15 01:31:03.057983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.388 [2024-05-15 01:31:03.057992] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.388 [2024-05-15 01:31:03.058163] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.388 [2024-05-15 01:31:03.058334] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.388 [2024-05-15 01:31:03.058344] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.388 [2024-05-15 01:31:03.058353] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.388 [2024-05-15 01:31:03.060905] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.388 [2024-05-15 01:31:03.069775] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.388 [2024-05-15 01:31:03.070401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.388 [2024-05-15 01:31:03.070914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.388 [2024-05-15 01:31:03.070954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.388 [2024-05-15 01:31:03.070996] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.388 [2024-05-15 01:31:03.071179] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.388 [2024-05-15 01:31:03.071356] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.388 [2024-05-15 01:31:03.071367] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.388 [2024-05-15 01:31:03.071376] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.388 [2024-05-15 01:31:03.074073] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.648 [2024-05-15 01:31:03.082869] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.648 [2024-05-15 01:31:03.083499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-05-15 01:31:03.083964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-05-15 01:31:03.084004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.648 [2024-05-15 01:31:03.084036] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.648 [2024-05-15 01:31:03.084269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.648 [2024-05-15 01:31:03.084438] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.648 [2024-05-15 01:31:03.084448] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.648 [2024-05-15 01:31:03.084457] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.648 [2024-05-15 01:31:03.087108] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.648 [2024-05-15 01:31:03.095658] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.648 [2024-05-15 01:31:03.096239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-05-15 01:31:03.096754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.648 [2024-05-15 01:31:03.096793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.648 [2024-05-15 01:31:03.096826] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.648 [2024-05-15 01:31:03.097412] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.649 [2024-05-15 01:31:03.097589] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.649 [2024-05-15 01:31:03.097600] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.649 [2024-05-15 01:31:03.097609] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.649 [2024-05-15 01:31:03.100219] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.649 [2024-05-15 01:31:03.108490] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.649 [2024-05-15 01:31:03.109123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.109649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.109691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.649 [2024-05-15 01:31:03.109723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.649 [2024-05-15 01:31:03.110055] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.649 [2024-05-15 01:31:03.110227] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.649 [2024-05-15 01:31:03.110238] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.649 [2024-05-15 01:31:03.110247] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.649 [2024-05-15 01:31:03.112798] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.649 [2024-05-15 01:31:03.121246] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.649 [2024-05-15 01:31:03.121831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.122251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.122292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.649 [2024-05-15 01:31:03.122325] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.649 [2024-05-15 01:31:03.122922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.649 [2024-05-15 01:31:03.123362] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.649 [2024-05-15 01:31:03.123373] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.649 [2024-05-15 01:31:03.123382] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.649 [2024-05-15 01:31:03.125884] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.649 [2024-05-15 01:31:03.134010] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.649 [2024-05-15 01:31:03.134630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.135072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.135112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.649 [2024-05-15 01:31:03.135144] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.649 [2024-05-15 01:31:03.135756] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.649 [2024-05-15 01:31:03.136367] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.649 [2024-05-15 01:31:03.136381] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.649 [2024-05-15 01:31:03.136390] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.649 [2024-05-15 01:31:03.138943] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.649 [2024-05-15 01:31:03.146783] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.649 [2024-05-15 01:31:03.147375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.147823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.147863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.649 [2024-05-15 01:31:03.147895] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.649 [2024-05-15 01:31:03.148399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.649 [2024-05-15 01:31:03.148639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.649 [2024-05-15 01:31:03.148653] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.649 [2024-05-15 01:31:03.148666] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.649 [2024-05-15 01:31:03.152441] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.649 [2024-05-15 01:31:03.160188] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.649 [2024-05-15 01:31:03.160821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.161331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.161373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.649 [2024-05-15 01:31:03.161414] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.649 [2024-05-15 01:31:03.161580] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.649 [2024-05-15 01:31:03.161748] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.649 [2024-05-15 01:31:03.161758] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.649 [2024-05-15 01:31:03.161767] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.649 [2024-05-15 01:31:03.164324] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.649 [2024-05-15 01:31:03.172926] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.649 [2024-05-15 01:31:03.173510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.173938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.173979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.649 [2024-05-15 01:31:03.174011] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.649 [2024-05-15 01:31:03.174294] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.649 [2024-05-15 01:31:03.174461] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.649 [2024-05-15 01:31:03.174471] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.649 [2024-05-15 01:31:03.174483] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.649 [2024-05-15 01:31:03.177043] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.649 [2024-05-15 01:31:03.185688] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.649 [2024-05-15 01:31:03.186302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.186734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.186745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.649 [2024-05-15 01:31:03.186755] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.649 [2024-05-15 01:31:03.186922] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.649 [2024-05-15 01:31:03.187089] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.649 [2024-05-15 01:31:03.187099] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.649 [2024-05-15 01:31:03.187108] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.649 [2024-05-15 01:31:03.189743] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.649 [2024-05-15 01:31:03.198413] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.649 [2024-05-15 01:31:03.199044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.199554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.199598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.649 [2024-05-15 01:31:03.199631] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.649 [2024-05-15 01:31:03.200172] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.649 [2024-05-15 01:31:03.200418] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.649 [2024-05-15 01:31:03.200433] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.649 [2024-05-15 01:31:03.200445] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.649 [2024-05-15 01:31:03.204229] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.649 [2024-05-15 01:31:03.211646] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.649 [2024-05-15 01:31:03.212269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.212770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.212783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.649 [2024-05-15 01:31:03.212792] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.649 [2024-05-15 01:31:03.212964] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.649 [2024-05-15 01:31:03.213137] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.649 [2024-05-15 01:31:03.213148] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.649 [2024-05-15 01:31:03.213157] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.649 [2024-05-15 01:31:03.215807] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.649 [2024-05-15 01:31:03.224342] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.649 [2024-05-15 01:31:03.224883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.225333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.225346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.649 [2024-05-15 01:31:03.225356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.649 [2024-05-15 01:31:03.225523] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.649 [2024-05-15 01:31:03.225690] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.649 [2024-05-15 01:31:03.225701] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.649 [2024-05-15 01:31:03.225709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.649 [2024-05-15 01:31:03.228271] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.649 [2024-05-15 01:31:03.237099] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.649 [2024-05-15 01:31:03.237739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.238250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.238277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.649 [2024-05-15 01:31:03.238287] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.649 [2024-05-15 01:31:03.238453] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.649 [2024-05-15 01:31:03.238621] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.649 [2024-05-15 01:31:03.238631] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.649 [2024-05-15 01:31:03.238640] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.649 [2024-05-15 01:31:03.241196] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.649 [2024-05-15 01:31:03.249905] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.649 [2024-05-15 01:31:03.250513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.251023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.251063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.649 [2024-05-15 01:31:03.251095] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.649 [2024-05-15 01:31:03.251671] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.649 [2024-05-15 01:31:03.251839] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.649 [2024-05-15 01:31:03.251850] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.649 [2024-05-15 01:31:03.251859] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.649 [2024-05-15 01:31:03.254421] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 72907 Killed "${NVMF_APP[@]}" "$@" 00:28:27.649 01:31:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:27.649 01:31:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:27.649 01:31:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:27.649 01:31:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:27.649 01:31:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:27.649 [2024-05-15 01:31:03.262811] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.649 [2024-05-15 01:31:03.263437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.263854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.649 [2024-05-15 01:31:03.263894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.649 [2024-05-15 01:31:03.263926] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.649 [2024-05-15 01:31:03.264388] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.649 [2024-05-15 01:31:03.264562] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.650 [2024-05-15 01:31:03.264573] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.650 [2024-05-15 01:31:03.264582] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.650 [2024-05-15 01:31:03.267291] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.650 01:31:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=74293 00:28:27.650 01:31:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 74293 00:28:27.650 01:31:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:27.650 01:31:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 74293 ']' 00:28:27.650 01:31:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.650 01:31:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:27.650 01:31:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.650 01:31:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:27.650 01:31:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:27.650 [2024-05-15 01:31:03.275718] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.650 [2024-05-15 01:31:03.276345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-05-15 01:31:03.276703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-05-15 01:31:03.276744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.650 [2024-05-15 01:31:03.276776] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.650 [2024-05-15 01:31:03.277395] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.650 [2024-05-15 01:31:03.277568] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.650 [2024-05-15 01:31:03.277579] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.650 [2024-05-15 01:31:03.277589] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.650 [2024-05-15 01:31:03.280291] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.650 [2024-05-15 01:31:03.288762] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.650 [2024-05-15 01:31:03.289388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-05-15 01:31:03.289699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-05-15 01:31:03.289713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.650 [2024-05-15 01:31:03.289723] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.650 [2024-05-15 01:31:03.289896] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.650 [2024-05-15 01:31:03.290071] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.650 [2024-05-15 01:31:03.290081] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.650 [2024-05-15 01:31:03.290090] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.650 [2024-05-15 01:31:03.292781] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.650 [2024-05-15 01:31:03.301701] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.650 [2024-05-15 01:31:03.302117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-05-15 01:31:03.302478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-05-15 01:31:03.302492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.650 [2024-05-15 01:31:03.302502] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.650 [2024-05-15 01:31:03.302670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.650 [2024-05-15 01:31:03.302836] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.650 [2024-05-15 01:31:03.302847] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.650 [2024-05-15 01:31:03.302856] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.650 [2024-05-15 01:31:03.305571] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.650 [2024-05-15 01:31:03.314702] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.650 [2024-05-15 01:31:03.315134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-05-15 01:31:03.315446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-05-15 01:31:03.315459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.650 [2024-05-15 01:31:03.315469] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.650 [2024-05-15 01:31:03.315641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.650 [2024-05-15 01:31:03.315813] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.650 [2024-05-15 01:31:03.315824] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.650 [2024-05-15 01:31:03.315832] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.650 [2024-05-15 01:31:03.318556] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.650 [2024-05-15 01:31:03.319631] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:28:27.650 [2024-05-15 01:31:03.319684] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.650 [2024-05-15 01:31:03.327617] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.650 [2024-05-15 01:31:03.328156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-05-15 01:31:03.328567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.650 [2024-05-15 01:31:03.328580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.650 [2024-05-15 01:31:03.328590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.650 [2024-05-15 01:31:03.328763] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.650 [2024-05-15 01:31:03.328935] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.650 [2024-05-15 01:31:03.328946] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.650 [2024-05-15 01:31:03.328955] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.650 [2024-05-15 01:31:03.331650] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.914 [2024-05-15 01:31:03.340590] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.914 [2024-05-15 01:31:03.341120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.914 [2024-05-15 01:31:03.341553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.914 [2024-05-15 01:31:03.341595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.914 [2024-05-15 01:31:03.341627] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.914 [2024-05-15 01:31:03.341833] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.914 [2024-05-15 01:31:03.342007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.914 [2024-05-15 01:31:03.342018] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.914 [2024-05-15 01:31:03.342027] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.914 [2024-05-15 01:31:03.344730] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.914 [2024-05-15 01:31:03.353515] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.914 [2024-05-15 01:31:03.353905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.914 [2024-05-15 01:31:03.354334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.914 [2024-05-15 01:31:03.354348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.914 [2024-05-15 01:31:03.354359] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.914 [2024-05-15 01:31:03.354533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.914 [2024-05-15 01:31:03.354705] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.914 [2024-05-15 01:31:03.354716] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.914 [2024-05-15 01:31:03.354728] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.914 [2024-05-15 01:31:03.357429] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.914 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.914 [2024-05-15 01:31:03.366504] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.914 [2024-05-15 01:31:03.366904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.914 [2024-05-15 01:31:03.367045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.914 [2024-05-15 01:31:03.367057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.914 [2024-05-15 01:31:03.367067] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.914 [2024-05-15 01:31:03.367245] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.914 [2024-05-15 01:31:03.367417] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.914 [2024-05-15 01:31:03.367428] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.914 [2024-05-15 01:31:03.367437] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.914 [2024-05-15 01:31:03.370132] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.914 [2024-05-15 01:31:03.379528] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.914 [2024-05-15 01:31:03.380068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.914 [2024-05-15 01:31:03.380496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.914 [2024-05-15 01:31:03.380510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.915 [2024-05-15 01:31:03.380520] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.915 [2024-05-15 01:31:03.380687] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.915 [2024-05-15 01:31:03.380855] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.915 [2024-05-15 01:31:03.380865] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.915 [2024-05-15 01:31:03.380874] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.915 [2024-05-15 01:31:03.383552] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.915 [2024-05-15 01:31:03.392475] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.915 [2024-05-15 01:31:03.393114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.393466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.393479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.915 [2024-05-15 01:31:03.393489] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.915 [2024-05-15 01:31:03.393661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.915 [2024-05-15 01:31:03.393834] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.915 [2024-05-15 01:31:03.393844] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.915 [2024-05-15 01:31:03.393853] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.915 [2024-05-15 01:31:03.396525] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.915 [2024-05-15 01:31:03.397828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:27.915 [2024-05-15 01:31:03.405463] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.915 [2024-05-15 01:31:03.405995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.406215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.406228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.915 [2024-05-15 01:31:03.406238] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.915 [2024-05-15 01:31:03.406410] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.915 [2024-05-15 01:31:03.406583] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.915 [2024-05-15 01:31:03.406594] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.915 [2024-05-15 01:31:03.406603] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.915 [2024-05-15 01:31:03.409298] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
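For reference, the 'Total cores available: 3' notice is consistent with the '-m 0xE' core mask passed to nvmf_tgt above: 0xE is binary 1110, which selects cores 1, 2 and 3 and leaves core 0 free. A small illustrative check (not part of the test scripts):

# -m 0xE is a CPU core bitmask: 0xE = binary 1110 -> cores 1, 2 and 3.
mask=$((0xE))
count=0
for ((i = 0; i < 8; i++)); do
    if (( (mask >> i) & 1 )); then count=$((count + 1)); fi
done
echo "core mask 0xE enables $count cores"    # prints 3, matching "Total cores available: 3"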
00:28:27.915 [2024-05-15 01:31:03.418488] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.915 [2024-05-15 01:31:03.419124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.419466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.419479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.915 [2024-05-15 01:31:03.419488] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.915 [2024-05-15 01:31:03.419661] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.915 [2024-05-15 01:31:03.419833] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.915 [2024-05-15 01:31:03.419844] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.915 [2024-05-15 01:31:03.419853] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.915 [2024-05-15 01:31:03.422551] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.915 [2024-05-15 01:31:03.431457] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.915 [2024-05-15 01:31:03.432087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.432514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.432527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.915 [2024-05-15 01:31:03.432537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.915 [2024-05-15 01:31:03.432709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.915 [2024-05-15 01:31:03.432881] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.915 [2024-05-15 01:31:03.432892] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.915 [2024-05-15 01:31:03.432901] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.915 [2024-05-15 01:31:03.435601] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.915 [2024-05-15 01:31:03.444426] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.915 [2024-05-15 01:31:03.445075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.445449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.445462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.915 [2024-05-15 01:31:03.445473] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.915 [2024-05-15 01:31:03.445650] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.915 [2024-05-15 01:31:03.445822] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.915 [2024-05-15 01:31:03.445832] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.915 [2024-05-15 01:31:03.445841] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.915 [2024-05-15 01:31:03.448484] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.915 [2024-05-15 01:31:03.457356] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.915 [2024-05-15 01:31:03.457959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.458269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.458282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.915 [2024-05-15 01:31:03.458292] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.915 [2024-05-15 01:31:03.458463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.915 [2024-05-15 01:31:03.458636] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.915 [2024-05-15 01:31:03.458647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.915 [2024-05-15 01:31:03.458655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.915 [2024-05-15 01:31:03.461313] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.915 [2024-05-15 01:31:03.469568] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.915 [2024-05-15 01:31:03.469603] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:27.915 [2024-05-15 01:31:03.469616] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.915 [2024-05-15 01:31:03.469627] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:27.915 [2024-05-15 01:31:03.469636] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:27.915 [2024-05-15 01:31:03.469685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:27.915 [2024-05-15 01:31:03.469792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:27.915 [2024-05-15 01:31:03.469795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.915 [2024-05-15 01:31:03.470213] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.915 [2024-05-15 01:31:03.470830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.471184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.471202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.915 [2024-05-15 01:31:03.471217] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.915 [2024-05-15 01:31:03.471392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.915 [2024-05-15 01:31:03.471565] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.915 [2024-05-15 01:31:03.471576] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.915 [2024-05-15 01:31:03.471585] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.915 [2024-05-15 01:31:03.474287] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.915 [2024-05-15 01:31:03.483200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.915 [2024-05-15 01:31:03.483808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.484024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.484037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.915 [2024-05-15 01:31:03.484047] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.915 [2024-05-15 01:31:03.484227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.915 [2024-05-15 01:31:03.484401] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.915 [2024-05-15 01:31:03.484412] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.915 [2024-05-15 01:31:03.484421] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.915 [2024-05-15 01:31:03.487112] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.915 [2024-05-15 01:31:03.496204] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.915 [2024-05-15 01:31:03.496736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.496933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.496946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.915 [2024-05-15 01:31:03.496956] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.915 [2024-05-15 01:31:03.497129] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.915 [2024-05-15 01:31:03.497307] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.915 [2024-05-15 01:31:03.497318] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.915 [2024-05-15 01:31:03.497328] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.915 [2024-05-15 01:31:03.500025] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.915 [2024-05-15 01:31:03.509252] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.915 [2024-05-15 01:31:03.509823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.510258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.510271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.915 [2024-05-15 01:31:03.510281] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.915 [2024-05-15 01:31:03.510459] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.915 [2024-05-15 01:31:03.510632] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.915 [2024-05-15 01:31:03.510642] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.915 [2024-05-15 01:31:03.510652] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.915 [2024-05-15 01:31:03.513350] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.915 [2024-05-15 01:31:03.522268] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.915 [2024-05-15 01:31:03.522834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.523267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.523280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.915 [2024-05-15 01:31:03.523291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.915 [2024-05-15 01:31:03.523465] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.915 [2024-05-15 01:31:03.523638] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.915 [2024-05-15 01:31:03.523649] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.915 [2024-05-15 01:31:03.523659] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.915 [2024-05-15 01:31:03.526355] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.915 [2024-05-15 01:31:03.535268] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.915 [2024-05-15 01:31:03.535903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.915 [2024-05-15 01:31:03.536254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.916 [2024-05-15 01:31:03.536267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.916 [2024-05-15 01:31:03.536277] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.916 [2024-05-15 01:31:03.536450] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.916 [2024-05-15 01:31:03.536623] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.916 [2024-05-15 01:31:03.536634] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.916 [2024-05-15 01:31:03.536643] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.916 [2024-05-15 01:31:03.539341] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.916 [2024-05-15 01:31:03.548248] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.916 [2024-05-15 01:31:03.548641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.916 [2024-05-15 01:31:03.549071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.916 [2024-05-15 01:31:03.549083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.916 [2024-05-15 01:31:03.549092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.916 [2024-05-15 01:31:03.549269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.916 [2024-05-15 01:31:03.549444] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.916 [2024-05-15 01:31:03.549455] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.916 [2024-05-15 01:31:03.549464] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.916 [2024-05-15 01:31:03.552158] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.916 [2024-05-15 01:31:03.561227] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.916 [2024-05-15 01:31:03.561856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.916 [2024-05-15 01:31:03.562210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.916 [2024-05-15 01:31:03.562223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.916 [2024-05-15 01:31:03.562233] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.916 [2024-05-15 01:31:03.562404] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.916 [2024-05-15 01:31:03.562576] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.916 [2024-05-15 01:31:03.562586] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.916 [2024-05-15 01:31:03.562596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.916 [2024-05-15 01:31:03.565286] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.916 [2024-05-15 01:31:03.574199] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.916 [2024-05-15 01:31:03.574842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.916 [2024-05-15 01:31:03.575265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.916 [2024-05-15 01:31:03.575278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.916 [2024-05-15 01:31:03.575288] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.916 [2024-05-15 01:31:03.575460] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.916 [2024-05-15 01:31:03.575637] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.916 [2024-05-15 01:31:03.575648] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.916 [2024-05-15 01:31:03.575657] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.916 [2024-05-15 01:31:03.578357] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.916 [2024-05-15 01:31:03.587099] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.916 [2024-05-15 01:31:03.587728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.916 [2024-05-15 01:31:03.588165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.916 [2024-05-15 01:31:03.588177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.916 [2024-05-15 01:31:03.588186] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.916 [2024-05-15 01:31:03.588363] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.916 [2024-05-15 01:31:03.588534] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.916 [2024-05-15 01:31:03.588547] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.916 [2024-05-15 01:31:03.588557] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.916 [2024-05-15 01:31:03.591253] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.916 [2024-05-15 01:31:03.600143] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.916 [2024-05-15 01:31:03.600771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.916 [2024-05-15 01:31:03.601134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.916 [2024-05-15 01:31:03.601146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:27.916 [2024-05-15 01:31:03.601156] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:27.916 [2024-05-15 01:31:03.601333] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:27.916 [2024-05-15 01:31:03.601505] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.916 [2024-05-15 01:31:03.601515] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.916 [2024-05-15 01:31:03.601524] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.176 [2024-05-15 01:31:03.604233] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.176 [2024-05-15 01:31:03.613144] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.176 [2024-05-15 01:31:03.613779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.176 [2024-05-15 01:31:03.614203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.177 [2024-05-15 01:31:03.614217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.177 [2024-05-15 01:31:03.614226] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.177 [2024-05-15 01:31:03.614398] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.177 [2024-05-15 01:31:03.614569] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.177 [2024-05-15 01:31:03.614580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.177 [2024-05-15 01:31:03.614588] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.177 [2024-05-15 01:31:03.617284] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.177 [2024-05-15 01:31:03.626202] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.177 [2024-05-15 01:31:03.626830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.177 [2024-05-15 01:31:03.627129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.177 [2024-05-15 01:31:03.627141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.177 [2024-05-15 01:31:03.627151] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.177 [2024-05-15 01:31:03.627328] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.177 [2024-05-15 01:31:03.627500] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.177 [2024-05-15 01:31:03.627510] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.177 [2024-05-15 01:31:03.627522] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.177 [2024-05-15 01:31:03.630218] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.177 [2024-05-15 01:31:03.639101] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.177 [2024-05-15 01:31:03.639723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.177 [2024-05-15 01:31:03.640151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.177 [2024-05-15 01:31:03.640163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.177 [2024-05-15 01:31:03.640173] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.177 [2024-05-15 01:31:03.640350] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.177 [2024-05-15 01:31:03.640523] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.177 [2024-05-15 01:31:03.640533] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.177 [2024-05-15 01:31:03.640542] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.177 [2024-05-15 01:31:03.643247] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.177 [2024-05-15 01:31:03.652147] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.177 [2024-05-15 01:31:03.652781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.177 [2024-05-15 01:31:03.653187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.177 [2024-05-15 01:31:03.653203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.177 [2024-05-15 01:31:03.653212] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.177 [2024-05-15 01:31:03.653384] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.177 [2024-05-15 01:31:03.653557] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.177 [2024-05-15 01:31:03.653567] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.177 [2024-05-15 01:31:03.653576] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.177 [2024-05-15 01:31:03.656273] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.177 [2024-05-15 01:31:03.665170] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.177 [2024-05-15 01:31:03.665577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.177 [2024-05-15 01:31:03.666006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.177 [2024-05-15 01:31:03.666019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.177 [2024-05-15 01:31:03.666028] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.177 [2024-05-15 01:31:03.666205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.177 [2024-05-15 01:31:03.666376] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.177 [2024-05-15 01:31:03.666387] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.177 [2024-05-15 01:31:03.666396] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.177 [2024-05-15 01:31:03.669091] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.177 [2024-05-15 01:31:03.678155] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.177 [2024-05-15 01:31:03.678785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.177 [2024-05-15 01:31:03.679188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.177 [2024-05-15 01:31:03.679205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.177 [2024-05-15 01:31:03.679215] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.177 [2024-05-15 01:31:03.679386] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.177 [2024-05-15 01:31:03.679560] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.177 [2024-05-15 01:31:03.679570] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.177 [2024-05-15 01:31:03.679579] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.177 [2024-05-15 01:31:03.682272] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.177 [2024-05-15 01:31:03.691203] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.177 [2024-05-15 01:31:03.691578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.177 [2024-05-15 01:31:03.692008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.177 [2024-05-15 01:31:03.692020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.177 [2024-05-15 01:31:03.692030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.177 [2024-05-15 01:31:03.692205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.177 [2024-05-15 01:31:03.692378] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.177 [2024-05-15 01:31:03.692388] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.177 [2024-05-15 01:31:03.692397] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.177 [2024-05-15 01:31:03.695092] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.177 [2024-05-15 01:31:03.704150] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.177 [2024-05-15 01:31:03.704714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.177 [2024-05-15 01:31:03.705145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.177 [2024-05-15 01:31:03.705157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.177 [2024-05-15 01:31:03.705167] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.177 [2024-05-15 01:31:03.705342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.177 [2024-05-15 01:31:03.705514] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.177 [2024-05-15 01:31:03.705524] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.177 [2024-05-15 01:31:03.705534] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.177 [2024-05-15 01:31:03.708232] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.177 [2024-05-15 01:31:03.717137] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.178 [2024-05-15 01:31:03.717746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.178 [2024-05-15 01:31:03.717945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.178 [2024-05-15 01:31:03.717957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.178 [2024-05-15 01:31:03.717967] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.178 [2024-05-15 01:31:03.718138] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.178 [2024-05-15 01:31:03.718313] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.178 [2024-05-15 01:31:03.718324] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.178 [2024-05-15 01:31:03.718333] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.178 [2024-05-15 01:31:03.721025] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.178 [2024-05-15 01:31:03.730099] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.178 [2024-05-15 01:31:03.730713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.178 [2024-05-15 01:31:03.731138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.178 [2024-05-15 01:31:03.731151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.178 [2024-05-15 01:31:03.731160] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.178 [2024-05-15 01:31:03.731336] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.178 [2024-05-15 01:31:03.731508] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.178 [2024-05-15 01:31:03.731519] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.178 [2024-05-15 01:31:03.731528] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.178 [2024-05-15 01:31:03.734224] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.178 [2024-05-15 01:31:03.743134] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.178 [2024-05-15 01:31:03.743768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.178 [2024-05-15 01:31:03.744147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.178 [2024-05-15 01:31:03.744159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.178 [2024-05-15 01:31:03.744168] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.178 [2024-05-15 01:31:03.744345] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.178 [2024-05-15 01:31:03.744516] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.178 [2024-05-15 01:31:03.744527] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.178 [2024-05-15 01:31:03.744536] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.178 [2024-05-15 01:31:03.747231] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.178 [2024-05-15 01:31:03.756138] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.178 [2024-05-15 01:31:03.756532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.178 [2024-05-15 01:31:03.756912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.178 [2024-05-15 01:31:03.756924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.178 [2024-05-15 01:31:03.756934] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.178 [2024-05-15 01:31:03.757105] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.178 [2024-05-15 01:31:03.757281] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.178 [2024-05-15 01:31:03.757292] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.178 [2024-05-15 01:31:03.757301] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.178 [2024-05-15 01:31:03.759989] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.178 [2024-05-15 01:31:03.769050] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.178 [2024-05-15 01:31:03.769662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.178 [2024-05-15 01:31:03.770006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.178 [2024-05-15 01:31:03.770018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.178 [2024-05-15 01:31:03.770029] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.178 [2024-05-15 01:31:03.770204] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.178 [2024-05-15 01:31:03.770376] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.178 [2024-05-15 01:31:03.770386] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.178 [2024-05-15 01:31:03.770396] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.178 [2024-05-15 01:31:03.773088] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.178 [2024-05-15 01:31:03.781997] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.178 [2024-05-15 01:31:03.782626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.178 [2024-05-15 01:31:03.782915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.178 [2024-05-15 01:31:03.782927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.178 [2024-05-15 01:31:03.782937] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.178 [2024-05-15 01:31:03.783108] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.178 [2024-05-15 01:31:03.783284] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.178 [2024-05-15 01:31:03.783295] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.178 [2024-05-15 01:31:03.783304] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.178 [2024-05-15 01:31:03.786001] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.178 [2024-05-15 01:31:03.794902] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.178 [2024-05-15 01:31:03.795533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.178 [2024-05-15 01:31:03.795730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.178 [2024-05-15 01:31:03.795745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.178 [2024-05-15 01:31:03.795755] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.178 [2024-05-15 01:31:03.795927] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.178 [2024-05-15 01:31:03.796101] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.178 [2024-05-15 01:31:03.796111] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.178 [2024-05-15 01:31:03.796120] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.178 [2024-05-15 01:31:03.798819] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.178 [2024-05-15 01:31:03.807889] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.178 [2024-05-15 01:31:03.808431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.178 [2024-05-15 01:31:03.808785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.178 [2024-05-15 01:31:03.808798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.178 [2024-05-15 01:31:03.808807] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.178 [2024-05-15 01:31:03.808979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.178 [2024-05-15 01:31:03.809152] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.178 [2024-05-15 01:31:03.809162] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.178 [2024-05-15 01:31:03.809171] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.178 [2024-05-15 01:31:03.811870] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.178 [2024-05-15 01:31:03.820826] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.178 [2024-05-15 01:31:03.821455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.178 [2024-05-15 01:31:03.821762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.178 [2024-05-15 01:31:03.821775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.178 [2024-05-15 01:31:03.821785] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.178 [2024-05-15 01:31:03.821957] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.178 [2024-05-15 01:31:03.822129] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.178 [2024-05-15 01:31:03.822140] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.178 [2024-05-15 01:31:03.822149] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.179 [2024-05-15 01:31:03.824843] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.179 [2024-05-15 01:31:03.833757] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.179 [2024-05-15 01:31:03.834315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.179 [2024-05-15 01:31:03.834619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.179 [2024-05-15 01:31:03.834632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.179 [2024-05-15 01:31:03.834646] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.179 [2024-05-15 01:31:03.834819] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.179 [2024-05-15 01:31:03.834992] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.179 [2024-05-15 01:31:03.835002] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.179 [2024-05-15 01:31:03.835011] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.179 [2024-05-15 01:31:03.837713] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.179 [2024-05-15 01:31:03.846774] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.179 [2024-05-15 01:31:03.847306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.179 [2024-05-15 01:31:03.847605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.179 [2024-05-15 01:31:03.847617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.179 [2024-05-15 01:31:03.847628] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.179 [2024-05-15 01:31:03.847801] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.179 [2024-05-15 01:31:03.847973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.179 [2024-05-15 01:31:03.847984] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.179 [2024-05-15 01:31:03.847994] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.179 [2024-05-15 01:31:03.850690] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.179 [2024-05-15 01:31:03.859771] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.179 [2024-05-15 01:31:03.860323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.179 [2024-05-15 01:31:03.860680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.179 [2024-05-15 01:31:03.860693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.179 [2024-05-15 01:31:03.860703] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.179 [2024-05-15 01:31:03.860875] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.179 [2024-05-15 01:31:03.861046] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.179 [2024-05-15 01:31:03.861057] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.179 [2024-05-15 01:31:03.861066] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.179 [2024-05-15 01:31:03.863765] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.439 [2024-05-15 01:31:03.872683] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.439 [2024-05-15 01:31:03.873239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.439 [2024-05-15 01:31:03.873646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.439 [2024-05-15 01:31:03.873659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.439 [2024-05-15 01:31:03.873669] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.439 [2024-05-15 01:31:03.873844] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.439 [2024-05-15 01:31:03.874018] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.439 [2024-05-15 01:31:03.874029] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.439 [2024-05-15 01:31:03.874038] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.439 [2024-05-15 01:31:03.876738] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.439 [2024-05-15 01:31:03.885663] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.439 [2024-05-15 01:31:03.886286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.439 [2024-05-15 01:31:03.886720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.439 [2024-05-15 01:31:03.886733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.439 [2024-05-15 01:31:03.886743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.439 [2024-05-15 01:31:03.886914] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.439 [2024-05-15 01:31:03.887087] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.439 [2024-05-15 01:31:03.887098] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.439 [2024-05-15 01:31:03.887107] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.440 [2024-05-15 01:31:03.889803] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.440 [2024-05-15 01:31:03.898561] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.440 [2024-05-15 01:31:03.899184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:03.899616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:03.899629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.440 [2024-05-15 01:31:03.899638] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.440 [2024-05-15 01:31:03.899810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.440 [2024-05-15 01:31:03.899983] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.440 [2024-05-15 01:31:03.899994] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.440 [2024-05-15 01:31:03.900003] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.440 [2024-05-15 01:31:03.902696] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.440 [2024-05-15 01:31:03.911471] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.440 [2024-05-15 01:31:03.912010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:03.912387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:03.912400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.440 [2024-05-15 01:31:03.912410] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.440 [2024-05-15 01:31:03.912582] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.440 [2024-05-15 01:31:03.912757] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.440 [2024-05-15 01:31:03.912769] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.440 [2024-05-15 01:31:03.912778] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.440 [2024-05-15 01:31:03.915481] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.440 [2024-05-15 01:31:03.924385] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.440 [2024-05-15 01:31:03.924939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:03.925283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:03.925296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.440 [2024-05-15 01:31:03.925306] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.440 [2024-05-15 01:31:03.925477] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.440 [2024-05-15 01:31:03.925650] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.440 [2024-05-15 01:31:03.925660] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.440 [2024-05-15 01:31:03.925669] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.440 [2024-05-15 01:31:03.928371] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.440 [2024-05-15 01:31:03.937287] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.440 [2024-05-15 01:31:03.937893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:03.938297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:03.938312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.440 [2024-05-15 01:31:03.938322] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.440 [2024-05-15 01:31:03.938494] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.440 [2024-05-15 01:31:03.938667] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.440 [2024-05-15 01:31:03.938678] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.440 [2024-05-15 01:31:03.938687] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.440 [2024-05-15 01:31:03.941391] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.440 [2024-05-15 01:31:03.950314] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.440 [2024-05-15 01:31:03.950848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:03.951159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:03.951172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.440 [2024-05-15 01:31:03.951181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.440 [2024-05-15 01:31:03.951359] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.440 [2024-05-15 01:31:03.951530] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.440 [2024-05-15 01:31:03.951544] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.440 [2024-05-15 01:31:03.951553] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.440 [2024-05-15 01:31:03.954252] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.440 [2024-05-15 01:31:03.963314] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.440 [2024-05-15 01:31:03.963755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:03.964107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:03.964119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.440 [2024-05-15 01:31:03.964129] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.440 [2024-05-15 01:31:03.964305] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.440 [2024-05-15 01:31:03.964478] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.440 [2024-05-15 01:31:03.964489] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.440 [2024-05-15 01:31:03.964498] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.440 [2024-05-15 01:31:03.967200] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.440 [2024-05-15 01:31:03.976285] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.440 [2024-05-15 01:31:03.976884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:03.977278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:03.977293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.440 [2024-05-15 01:31:03.977302] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.440 [2024-05-15 01:31:03.977475] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.440 [2024-05-15 01:31:03.977647] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.440 [2024-05-15 01:31:03.977658] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.440 [2024-05-15 01:31:03.977667] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.440 [2024-05-15 01:31:03.980371] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.440 [2024-05-15 01:31:03.989291] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.440 [2024-05-15 01:31:03.989919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:03.990294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:03.990309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.440 [2024-05-15 01:31:03.990319] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.440 [2024-05-15 01:31:03.990492] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.440 [2024-05-15 01:31:03.990664] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.440 [2024-05-15 01:31:03.990674] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.440 [2024-05-15 01:31:03.990688] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.440 [2024-05-15 01:31:03.993389] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.440 [2024-05-15 01:31:04.002311] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.440 [2024-05-15 01:31:04.002794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:04.003208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:04.003222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.440 [2024-05-15 01:31:04.003232] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.440 [2024-05-15 01:31:04.003404] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.440 [2024-05-15 01:31:04.003577] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.440 [2024-05-15 01:31:04.003587] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.440 [2024-05-15 01:31:04.003596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.440 [2024-05-15 01:31:04.006314] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.440 [2024-05-15 01:31:04.015224] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.440 [2024-05-15 01:31:04.015704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:04.016064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:04.016077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.440 [2024-05-15 01:31:04.016086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.440 [2024-05-15 01:31:04.016261] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.440 [2024-05-15 01:31:04.016434] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.440 [2024-05-15 01:31:04.016444] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.440 [2024-05-15 01:31:04.016453] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.440 [2024-05-15 01:31:04.019144] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.440 [2024-05-15 01:31:04.028212] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.440 [2024-05-15 01:31:04.028791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:04.029101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:04.029113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.440 [2024-05-15 01:31:04.029123] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.440 [2024-05-15 01:31:04.029299] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.440 [2024-05-15 01:31:04.029471] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.440 [2024-05-15 01:31:04.029482] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.440 [2024-05-15 01:31:04.029491] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.440 [2024-05-15 01:31:04.032189] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.440 [2024-05-15 01:31:04.041107] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.440 [2024-05-15 01:31:04.041713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:04.042065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:04.042077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.440 [2024-05-15 01:31:04.042087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.440 [2024-05-15 01:31:04.042264] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.440 [2024-05-15 01:31:04.042436] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.440 [2024-05-15 01:31:04.042447] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.440 [2024-05-15 01:31:04.042457] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.440 [2024-05-15 01:31:04.045150] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.440 [2024-05-15 01:31:04.054075] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.440 [2024-05-15 01:31:04.054634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:04.055042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:04.055055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.440 [2024-05-15 01:31:04.055064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.440 [2024-05-15 01:31:04.055241] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.440 [2024-05-15 01:31:04.055413] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.440 [2024-05-15 01:31:04.055424] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.440 [2024-05-15 01:31:04.055433] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.440 [2024-05-15 01:31:04.058124] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.440 [2024-05-15 01:31:04.067052] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.440 [2024-05-15 01:31:04.067678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:04.068089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:04.068102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.440 [2024-05-15 01:31:04.068112] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.440 [2024-05-15 01:31:04.068289] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.440 [2024-05-15 01:31:04.068461] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.440 [2024-05-15 01:31:04.068471] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.440 [2024-05-15 01:31:04.068481] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.440 [2024-05-15 01:31:04.071183] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.440 [2024-05-15 01:31:04.079952] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.440 [2024-05-15 01:31:04.080568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:04.080871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:04.080884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.440 [2024-05-15 01:31:04.080894] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.440 [2024-05-15 01:31:04.081065] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.440 [2024-05-15 01:31:04.081248] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.440 [2024-05-15 01:31:04.081260] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.440 [2024-05-15 01:31:04.081269] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.440 [2024-05-15 01:31:04.083956] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.440 [2024-05-15 01:31:04.092885] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.440 [2024-05-15 01:31:04.093420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:04.093779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.440 [2024-05-15 01:31:04.093791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.440 [2024-05-15 01:31:04.093801] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.440 [2024-05-15 01:31:04.093972] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.441 [2024-05-15 01:31:04.094145] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.441 [2024-05-15 01:31:04.094156] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.441 [2024-05-15 01:31:04.094166] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.441 [2024-05-15 01:31:04.096862] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.441 [2024-05-15 01:31:04.105941] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.441 [2024-05-15 01:31:04.106431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.441 [2024-05-15 01:31:04.106739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.441 [2024-05-15 01:31:04.106752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.441 [2024-05-15 01:31:04.106762] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.441 [2024-05-15 01:31:04.106934] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.441 [2024-05-15 01:31:04.107105] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.441 [2024-05-15 01:31:04.107116] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.441 [2024-05-15 01:31:04.107125] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.441 [2024-05-15 01:31:04.109825] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.441 [2024-05-15 01:31:04.118893] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.441 [2024-05-15 01:31:04.119349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.441 [2024-05-15 01:31:04.119639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.441 [2024-05-15 01:31:04.119651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.441 [2024-05-15 01:31:04.119661] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.441 [2024-05-15 01:31:04.119832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.441 [2024-05-15 01:31:04.120005] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.441 [2024-05-15 01:31:04.120015] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.441 [2024-05-15 01:31:04.120024] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.441 [2024-05-15 01:31:04.122720] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.441 01:31:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:28.441 01:31:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:28:28.441 01:31:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:28.441 01:31:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:28.441 01:31:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.701 [2024-05-15 01:31:04.131801] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.701 [2024-05-15 01:31:04.132426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.701 [2024-05-15 01:31:04.132786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.701 [2024-05-15 01:31:04.132799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.701 [2024-05-15 01:31:04.132809] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.701 [2024-05-15 01:31:04.132981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.701 [2024-05-15 01:31:04.133154] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.701 [2024-05-15 01:31:04.133164] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.701 [2024-05-15 01:31:04.133173] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.701 [2024-05-15 01:31:04.135901] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.701 [2024-05-15 01:31:04.144826] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.701 [2024-05-15 01:31:04.145405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.701 [2024-05-15 01:31:04.145713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.701 [2024-05-15 01:31:04.145726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.701 [2024-05-15 01:31:04.145736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.701 [2024-05-15 01:31:04.145908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.701 [2024-05-15 01:31:04.146079] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.701 [2024-05-15 01:31:04.146090] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.701 [2024-05-15 01:31:04.146099] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.701 [2024-05-15 01:31:04.148800] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.701 [2024-05-15 01:31:04.157867] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.701 [2024-05-15 01:31:04.158406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.701 [2024-05-15 01:31:04.158766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.701 [2024-05-15 01:31:04.158779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.701 [2024-05-15 01:31:04.158789] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.701 [2024-05-15 01:31:04.158961] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.701 [2024-05-15 01:31:04.159134] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.701 [2024-05-15 01:31:04.159145] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.701 [2024-05-15 01:31:04.159154] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.701 [2024-05-15 01:31:04.161852] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.701 01:31:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:28.701 [2024-05-15 01:31:04.170768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.701 01:31:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:28.701 [2024-05-15 01:31:04.171303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.701 01:31:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.701 01:31:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.701 [2024-05-15 01:31:04.171611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.701 [2024-05-15 01:31:04.171625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.701 [2024-05-15 01:31:04.171634] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.701 [2024-05-15 01:31:04.171806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.701 [2024-05-15 01:31:04.171977] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.701 [2024-05-15 01:31:04.171987] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.701 [2024-05-15 01:31:04.171996] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.701 [2024-05-15 01:31:04.174697] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
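Interleaved with the retries, the harness arms its cleanup before creating the transport: the trap registered at nvmf/common.sh@484 ensures process_shm and nvmftestfini run on SIGINT, SIGTERM, or normal exit, so the target app and kernel modules are torn down even if the test aborts early. A minimal sketch of that idiom, reusing the helper names exactly as they appear in the trace (they are the harness's own functions, shown only for illustration):

    # Dump shared-memory state (if any) and tear the target down on any exit path.
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT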
00:28:28.701 [2024-05-15 01:31:04.176950] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:28.701 01:31:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.701 01:31:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:28.701 01:31:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.701 01:31:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.701 [2024-05-15 01:31:04.183762] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.701 [2024-05-15 01:31:04.184319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.701 [2024-05-15 01:31:04.184673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.701 [2024-05-15 01:31:04.184685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.701 [2024-05-15 01:31:04.184698] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.701 [2024-05-15 01:31:04.184870] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.701 [2024-05-15 01:31:04.185041] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.701 [2024-05-15 01:31:04.185052] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.701 [2024-05-15 01:31:04.185062] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.701 [2024-05-15 01:31:04.187766] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.701 [2024-05-15 01:31:04.196698] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.701 [2024-05-15 01:31:04.197182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.701 [2024-05-15 01:31:04.197593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.701 [2024-05-15 01:31:04.197606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.701 [2024-05-15 01:31:04.197616] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.701 [2024-05-15 01:31:04.197788] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.701 [2024-05-15 01:31:04.197961] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.701 [2024-05-15 01:31:04.197972] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.701 [2024-05-15 01:31:04.197981] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.701 [2024-05-15 01:31:04.200678] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.701 [2024-05-15 01:31:04.209610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.701 [2024-05-15 01:31:04.210196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.701 [2024-05-15 01:31:04.210606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.702 [2024-05-15 01:31:04.210619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.702 [2024-05-15 01:31:04.210629] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.702 [2024-05-15 01:31:04.210801] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.702 [2024-05-15 01:31:04.210973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.702 [2024-05-15 01:31:04.210983] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.702 [2024-05-15 01:31:04.210993] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.702 [2024-05-15 01:31:04.213693] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.702 Malloc0 00:28:28.702 01:31:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.702 01:31:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:28.702 01:31:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.702 01:31:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.702 [2024-05-15 01:31:04.222631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.702 [2024-05-15 01:31:04.223165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.702 [2024-05-15 01:31:04.223599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.702 [2024-05-15 01:31:04.223615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.702 [2024-05-15 01:31:04.223625] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.702 [2024-05-15 01:31:04.223798] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.702 [2024-05-15 01:31:04.223970] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.702 [2024-05-15 01:31:04.223981] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.702 [2024-05-15 01:31:04.223990] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.702 [2024-05-15 01:31:04.226691] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:28.702 01:31:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.702 01:31:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:28.702 01:31:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.702 01:31:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.702 [2024-05-15 01:31:04.235607] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.702 [2024-05-15 01:31:04.236086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.702 [2024-05-15 01:31:04.236438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:28.702 [2024-05-15 01:31:04.236452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19169f0 with addr=10.0.0.2, port=4420 00:28:28.702 [2024-05-15 01:31:04.236462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19169f0 is same with the state(5) to be set 00:28:28.702 [2024-05-15 01:31:04.236636] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19169f0 (9): Bad file descriptor 00:28:28.702 [2024-05-15 01:31:04.236809] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:28.702 [2024-05-15 01:31:04.236820] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:28.702 [2024-05-15 01:31:04.236829] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.702 01:31:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.702 01:31:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:28.702 01:31:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.702 01:31:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:28.702 [2024-05-15 01:31:04.239529] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:28.702 [2024-05-15 01:31:04.240807] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:28.702 [2024-05-15 01:31:04.241051] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:28.702 01:31:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.702 01:31:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 73230 00:28:28.702 [2024-05-15 01:31:04.248615] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.702 [2024-05-15 01:31:04.281728] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
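The rpc_cmd calls traced above (host/bdevperf.sh@17 through @21) are the complete target bring-up for this test: create the TCP transport, back it with a 64 MiB malloc bdev using 512-byte blocks, create subsystem nqn.2016-06.io.spdk:cnode1, attach Malloc0 as its namespace, and finally add the 10.0.0.2:4420 listener; the resets only start succeeding once that listener exists. rpc_cmd wraps scripts/rpc.py in this harness, so a rough stand-alone equivalent (a sketch, assuming a target already running on the default RPC socket) is:

    # Same RPC sequence as the trace, issued directly against the running target.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420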
00:28:38.678
00:28:38.678 Latency(us)
00:28:38.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:38.678 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:38.678 Verification LBA range: start 0x0 length 0x4000
00:28:38.678 Nvme1n1 : 15.01 8727.75 34.09 12279.55 0.00 6073.57 1035.47 22754.10
00:28:38.678 ===================================================================================================================
00:28:38.678 Total : 8727.75 34.09 12279.55 0.00 6073.57 1035.47 22754.10
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:38.678 rmmod nvme_tcp
00:28:38.678 rmmod nvme_fabrics
00:28:38.678 rmmod nvme_keyring
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 74293 ']'
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 74293
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 74293 ']'
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 74293
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:28:38.678 01:31:12 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74293
00:28:38.678 01:31:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:28:38.678 01:31:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:28:38.678 01:31:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74293'
00:28:38.678 killing process with pid 74293
00:28:38.678 01:31:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 74293
00:28:38.678 [2024-05-15 01:31:13.011168] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:28:38.678 01:31:13 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@970 -- # wait 74293 00:28:38.678 01:31:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:38.678 01:31:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:38.678 01:31:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:38.678 01:31:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:38.678 01:31:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:38.678 01:31:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.678 01:31:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:38.678 01:31:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.057 01:31:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:40.057 00:28:40.057 real 0m27.291s 00:28:40.057 user 1m1.986s 00:28:40.057 sys 0m8.040s 00:28:40.057 01:31:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:40.057 01:31:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:40.057 ************************************ 00:28:40.057 END TEST nvmf_bdevperf 00:28:40.057 ************************************ 00:28:40.057 01:31:15 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:40.057 01:31:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:28:40.057 01:31:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:40.057 01:31:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:40.057 ************************************ 00:28:40.057 START TEST nvmf_target_disconnect 00:28:40.057 ************************************ 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:40.057 * Looking for test storage... 
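For reference, the bdevperf summary above reads, for Nvme1n1: runtime 15.01 s, 8727.75 IOPS, 34.09 MiB/s, 12279.55 failed I/Os per second, no timeouts, and average/min/max latency of 6073.57/1035.47/22754.10 us; the high failure rate plausibly reflects the controller resets being driven throughout the run. The throughput column is consistent with the 4096-byte I/O size from the job line:

    # Sanity check: IOPS x 4 KiB per I/O, expressed in MiB/s (prints 34.09).
    awk 'BEGIN { printf "%.2f MiB/s\n", 8727.75 * 4096 / (1024 * 1024) }'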
00:28:40.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestinit 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:28:40.057 01:31:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:46.625 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:46.625 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.625 01:31:21 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:46.625 Found net devices under 0000:af:00.0: cvl_0_0 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:46.625 Found net devices under 0000:af:00.1: cvl_0_1 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:46.625 01:31:21 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:46.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:46.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:28:46.626 00:28:46.626 --- 10.0.0.2 ping statistics --- 00:28:46.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.626 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:46.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:46.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:28:46.626 00:28:46.626 --- 10.0.0.1 ping statistics --- 00:28:46.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.626 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:46.626 01:31:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:46.910 ************************************ 00:28:46.910 START TEST nvmf_target_disconnect_tc1 00:28:46.910 ************************************ 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # set +e 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:46.910 EAL: No 
free 2048 kB hugepages reported on node 1 00:28:46.910 [2024-05-15 01:31:22.457258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.910 [2024-05-15 01:31:22.457727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.910 [2024-05-15 01:31:22.457741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x4fa4b0 with addr=10.0.0.2, port=4420 00:28:46.910 [2024-05-15 01:31:22.457762] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:46.910 [2024-05-15 01:31:22.457773] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:46.910 [2024-05-15 01:31:22.457781] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:28:46.910 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:46.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:46.910 Initializing NVMe Controllers 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # trap - ERR 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # print_backtrace 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # return 0 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@41 -- # set -e 00:28:46.910 00:28:46.910 real 0m0.106s 00:28:46.910 user 0m0.035s 00:28:46.910 sys 0m0.071s 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:46.910 ************************************ 00:28:46.910 END TEST nvmf_target_disconnect_tc1 00:28:46.910 ************************************ 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:46.910 ************************************ 00:28:46.910 START TEST nvmf_target_disconnect_tc2 00:28:46.910 ************************************ 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=79621 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 79621 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 79621 ']' 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:46.910 01:31:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:47.174 [2024-05-15 01:31:22.604410] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:28:47.174 [2024-05-15 01:31:22.604453] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:47.174 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.174 [2024-05-15 01:31:22.673791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:47.174 [2024-05-15 01:31:22.746667] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:47.174 [2024-05-15 01:31:22.746700] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:47.174 [2024-05-15 01:31:22.746709] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:47.174 [2024-05-15 01:31:22.746717] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:47.174 [2024-05-15 01:31:22.746740] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:47.174 [2024-05-15 01:31:22.746861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:47.174 [2024-05-15 01:31:22.746992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:47.174 [2024-05-15 01:31:22.747099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:47.174 [2024-05-15 01:31:22.747100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:47.741 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:47.741 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:28:47.741 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:47.741 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:47.742 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:48.005 Malloc0 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:48.005 [2024-05-15 01:31:23.467776] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:48.005 01:31:23 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:48.005 [2024-05-15 01:31:23.495801] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:48.005 [2024-05-15 01:31:23.496038] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # reconnectpid=79813 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@52 -- # sleep 2 00:28:48.005 01:31:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:48.005 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.913 01:31:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@53 -- # kill -9 79621 00:28:49.913 01:31:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@55 -- # sleep 2 00:28:49.913 Read completed with error (sct=0, sc=8) 00:28:49.913 starting I/O failed 00:28:49.913 Read completed with error (sct=0, sc=8) 00:28:49.913 starting I/O failed 00:28:49.913 Read completed with error (sct=0, sc=8) 00:28:49.913 starting I/O failed 00:28:49.913 Read completed with error (sct=0, sc=8) 00:28:49.913 starting I/O failed 00:28:49.913 Read completed with error (sct=0, sc=8) 00:28:49.913 starting I/O failed 00:28:49.913 Read completed with error (sct=0, sc=8) 00:28:49.913 starting I/O failed 00:28:49.913 Read completed with error (sct=0, sc=8) 00:28:49.913 starting I/O failed 00:28:49.913 Read completed with error (sct=0, sc=8) 00:28:49.913 starting I/O failed 00:28:49.913 Write completed with error (sct=0, sc=8) 00:28:49.913 starting I/O failed 00:28:49.913 Read completed with error (sct=0, sc=8) 00:28:49.913 starting I/O failed 00:28:49.913 Read completed with error (sct=0, sc=8) 00:28:49.913 starting I/O failed 00:28:49.913 Read completed with error (sct=0, sc=8) 00:28:49.913 starting 
I/O failed 00:28:49.913 Write completed with error (sct=0, sc=8) 00:28:49.913 starting I/O failed 00:28:49.913 Write completed with error (sct=0, sc=8) 00:28:49.913 starting I/O failed 00:28:49.913 Read completed with error (sct=0, sc=8) 00:28:49.913 starting I/O failed 00:28:49.913 Write completed with error (sct=0, sc=8) 00:28:49.913 starting I/O failed 00:28:49.913 Write completed with error (sct=0, sc=8) 00:28:49.913 starting I/O failed 00:28:49.913 Write completed with error (sct=0, sc=8) 00:28:49.913 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 [2024-05-15 01:31:25.523741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 
00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 [2024-05-15 01:31:25.523970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write 
completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 [2024-05-15 01:31:25.524197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Write completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 Read completed with error (sct=0, sc=8) 00:28:49.914 starting I/O failed 00:28:49.914 [2024-05-15 01:31:25.524411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport 
error -6 (No such device or address) on qpair id 4 00:28:49.914 [2024-05-15 01:31:25.524832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.914 [2024-05-15 01:31:25.525247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.914 [2024-05-15 01:31:25.525260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.914 qpair failed and we were unable to recover it. 00:28:49.914 [2024-05-15 01:31:25.525594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.525960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.526000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.526424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.526858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.526896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.527385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.527887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.527925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.528339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.528761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.528801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.529268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.529681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.529720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.530138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.530553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.530594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 
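The repeated connect() failures that begin here (errno = 111, i.e. ECONNREFUSED on Linux) are the expected consequence of the disconnect this test case performs: target_disconnect.sh@53 sent kill -9 to the nvmf_tgt process (nvmfpid 79621) while the reconnect example was driving queued I/O, so the outstanding commands complete with errors and every later reconnection attempt to 10.0.0.2 port 4420 is refused. A condensed sketch of the tc2 sequence, reconstructed only from the xtrace above (illustrative, not the verbatim test script; paths are shortened and rpc_cmd is the harness's RPC helper exactly as traced):

# target side: start nvmf_tgt inside the cvl_0_0_ns_spdk namespace and configure it over RPC
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_transport -t tcp -o
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: drive I/O with the reconnect example against that listener
./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
# then kill the target: in-flight I/O completes with errors and every subsequent
# connect() is refused (errno 111), which is what the log records from here on
kill -9 "$nvmfpid"
sleep 2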
00:28:49.915 [2024-05-15 01:31:25.530936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.531285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.531297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.531655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.532070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.532109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.532541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.532952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.532964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.533395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.533793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.533805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.534183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.534478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.534494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.534929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.535414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.535453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.535801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.536212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.536252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 
00:28:49.915 [2024-05-15 01:31:25.536645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.537100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.537139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.537638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.537974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.538012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.538502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.538911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.538950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.539412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.539881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.539919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.540320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.540732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.540770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.541219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.541682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.541722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.542120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.542465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.542505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 
00:28:49.915 [2024-05-15 01:31:25.542866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.543341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.543379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.543767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.544028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.544067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.544549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.544948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.544987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.545477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.545957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.545996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.546349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.546815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.546853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.547270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.547664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.547702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.548179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.548611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.548649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 
00:28:49.915 [2024-05-15 01:31:25.549041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.549441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.549480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.549895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.550375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.550415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.550832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.551182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.915 [2024-05-15 01:31:25.551202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.915 qpair failed and we were unable to recover it. 00:28:49.915 [2024-05-15 01:31:25.551640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.552089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.552127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.916 qpair failed and we were unable to recover it. 00:28:49.916 [2024-05-15 01:31:25.552554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.552812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.552828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.916 qpair failed and we were unable to recover it. 00:28:49.916 [2024-05-15 01:31:25.553258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.553604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.553642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.916 qpair failed and we were unable to recover it. 00:28:49.916 [2024-05-15 01:31:25.554124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.554605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.554651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.916 qpair failed and we were unable to recover it. 
00:28:49.916 [2024-05-15 01:31:25.555057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.555539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.555596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.916 qpair failed and we were unable to recover it. 00:28:49.916 [2024-05-15 01:31:25.555981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.556394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.556433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.916 qpair failed and we were unable to recover it. 00:28:49.916 [2024-05-15 01:31:25.556917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.557422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.557462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.916 qpair failed and we were unable to recover it. 00:28:49.916 [2024-05-15 01:31:25.557875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.558301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.558340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.916 qpair failed and we were unable to recover it. 00:28:49.916 [2024-05-15 01:31:25.558818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.559223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.559263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.916 qpair failed and we were unable to recover it. 00:28:49.916 [2024-05-15 01:31:25.559680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.560155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.560201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.916 qpair failed and we were unable to recover it. 00:28:49.916 [2024-05-15 01:31:25.560588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.560966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.560982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.916 qpair failed and we were unable to recover it. 
00:28:49.916 [2024-05-15 01:31:25.561369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.561798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.561814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.916 qpair failed and we were unable to recover it. 00:28:49.916 [2024-05-15 01:31:25.562184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.562592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.562631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.916 qpair failed and we were unable to recover it. 00:28:49.916 [2024-05-15 01:31:25.562993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.563473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.563513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.916 qpair failed and we were unable to recover it. 00:28:49.916 [2024-05-15 01:31:25.564023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.564522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.564572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.916 qpair failed and we were unable to recover it. 00:28:49.916 [2024-05-15 01:31:25.564921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.565328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.565357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.916 qpair failed and we were unable to recover it. 00:28:49.916 [2024-05-15 01:31:25.565771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.566152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.566200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.916 qpair failed and we were unable to recover it. 00:28:49.916 [2024-05-15 01:31:25.566625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.567025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.916 [2024-05-15 01:31:25.567064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:49.916 qpair failed and we were unable to recover it. 
00:28:49.916 [2024-05-15 01:31:25.567546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.916 [2024-05-15 01:31:25.568043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.916 [2024-05-15 01:31:25.568082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420
00:28:49.916 qpair failed and we were unable to recover it.
[The same four-line failure sequence repeats back-to-back from [2024-05-15 01:31:25.568441] through [2024-05-15 01:31:25.704321] (console timestamps 00:28:49.916 through 00:28:50.187): every connect() attempt to 10.0.0.2, port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error on tqpair=0x21f8560, and each attempt ends with "qpair failed and we were unable to recover it."]
00:28:50.187 [2024-05-15 01:31:25.704781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.705261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.705300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.187 qpair failed and we were unable to recover it. 00:28:50.187 [2024-05-15 01:31:25.705640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.706064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.706103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.187 qpair failed and we were unable to recover it. 00:28:50.187 [2024-05-15 01:31:25.706506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.706858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.706874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.187 qpair failed and we were unable to recover it. 00:28:50.187 [2024-05-15 01:31:25.707211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.707562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.707600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.187 qpair failed and we were unable to recover it. 00:28:50.187 [2024-05-15 01:31:25.708060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.708519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.708535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.187 qpair failed and we were unable to recover it. 00:28:50.187 [2024-05-15 01:31:25.708836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.709221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.709261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.187 qpair failed and we were unable to recover it. 00:28:50.187 [2024-05-15 01:31:25.709748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.710203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.710242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.187 qpair failed and we were unable to recover it. 
00:28:50.187 [2024-05-15 01:31:25.710702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.711120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.711158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.187 qpair failed and we were unable to recover it. 00:28:50.187 [2024-05-15 01:31:25.711581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.712042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.712080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.187 qpair failed and we were unable to recover it. 00:28:50.187 [2024-05-15 01:31:25.712562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.713015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.713057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.187 qpair failed and we were unable to recover it. 00:28:50.187 [2024-05-15 01:31:25.713489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.713901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.713939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.187 qpair failed and we were unable to recover it. 00:28:50.187 [2024-05-15 01:31:25.714346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.714794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.714832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.187 qpair failed and we were unable to recover it. 00:28:50.187 [2024-05-15 01:31:25.715315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.715717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.715755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.187 qpair failed and we were unable to recover it. 00:28:50.187 [2024-05-15 01:31:25.716241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.716573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.187 [2024-05-15 01:31:25.716589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.187 qpair failed and we were unable to recover it. 
00:28:50.188 [2024-05-15 01:31:25.717050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.717524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.717563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.718030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.718481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.718520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.719003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.719456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.719495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.719882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.720263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.720303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.720693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.721146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.721183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.721618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.721966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.721982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.722425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.722877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.722915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 
00:28:50.188 [2024-05-15 01:31:25.723391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.723866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.723904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.724309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.724685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.724724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.725112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.725555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.725571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.725998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.726373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.726389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.726755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.727090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.727129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.727515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.727939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.727977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.728424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.728825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.728864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 
00:28:50.188 [2024-05-15 01:31:25.729317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.729525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.729541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.729735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.730091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.730107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.730515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.730928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.730967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.731431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.731781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.731798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.732135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.732580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.732619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.733106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.733551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.733591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.734001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.734470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.734486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 
00:28:50.188 [2024-05-15 01:31:25.734900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.735303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.735342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.735747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.736139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.736178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.736596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.737050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.737089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.737484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.737875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.737914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.738394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.738868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.738907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.739266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.739733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.739771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.740181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.740618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.740657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 
00:28:50.188 [2024-05-15 01:31:25.741139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.741640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.741657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.188 qpair failed and we were unable to recover it. 00:28:50.188 [2024-05-15 01:31:25.742090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.742521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.188 [2024-05-15 01:31:25.742560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.742963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.743390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.743429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.743912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.744171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.744223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.744637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.745112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.745150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.745550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.746020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.746058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.746523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.746881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.746919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 
00:28:50.189 [2024-05-15 01:31:25.747305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.747737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.747775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.748253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.748748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.748786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.749246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.749641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.749680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.750166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.750617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.750634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.751000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.751430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.751470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.751905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.752286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.752304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.752614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.752999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.753015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 
00:28:50.189 [2024-05-15 01:31:25.753333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.753709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.753726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.754102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.754464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.754480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.754841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.755100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.755138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.755573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.756004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.756043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.756504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.756955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.756993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.757421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.757820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.757859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.758335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.758771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.758810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 
00:28:50.189 [2024-05-15 01:31:25.759296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.759793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.759831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.760244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.760738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.760777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.761238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.761728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.761767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.762234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.762636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.762675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.763152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.763636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.763675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.764185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.764667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.764706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.765189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.765625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.765663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 
00:28:50.189 [2024-05-15 01:31:25.765931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.766258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.766297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.766778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.767233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.767272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.767751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.767937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.767975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.189 qpair failed and we were unable to recover it. 00:28:50.189 [2024-05-15 01:31:25.768460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.768870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.189 [2024-05-15 01:31:25.768908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.769314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.769695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.769734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.770170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.770574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.770590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.770805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.771110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.771149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 
00:28:50.190 [2024-05-15 01:31:25.771667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.772067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.772106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.772572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.773024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.773062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.773541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.773818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.773856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.774199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.774631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.774670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.775154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.775616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.775633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.776095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.776435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.776477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.776825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.777211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.777250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 
00:28:50.190 [2024-05-15 01:31:25.777616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.778040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.778078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.778486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.778854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.778869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.779124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.779602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.779642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.780031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.780483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.780522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.780986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.781424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.781462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.781922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.782341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.782381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.782785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.783188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.783235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 
00:28:50.190 [2024-05-15 01:31:25.783625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.783952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.783990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.784449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.784926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.784965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.785380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.785795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.785834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.786310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.786755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.786771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.787180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.787605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.787644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.788127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.788607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.788653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.789128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.789485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.789524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 
00:28:50.190 [2024-05-15 01:31:25.789930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.790331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.790347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.790723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.791086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.791102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.791465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.791941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.791980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.792301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.792774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.792812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.793223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.793614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.793653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.190 qpair failed and we were unable to recover it. 00:28:50.190 [2024-05-15 01:31:25.793920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.794321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.190 [2024-05-15 01:31:25.794337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.191 qpair failed and we were unable to recover it. 00:28:50.191 [2024-05-15 01:31:25.794791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.191 [2024-05-15 01:31:25.795184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.191 [2024-05-15 01:31:25.795231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.191 qpair failed and we were unable to recover it. 
[... the same sequence of entries repeats continuously between the timestamps above and below: posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." ...]
00:28:50.462 [2024-05-15 01:31:25.925642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.462 [2024-05-15 01:31:25.926092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.462 [2024-05-15 01:31:25.926130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420
00:28:50.462 qpair failed and we were unable to recover it.
00:28:50.462 [2024-05-15 01:31:25.926405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.926719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.926757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.462 qpair failed and we were unable to recover it. 00:28:50.462 [2024-05-15 01:31:25.927182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.927608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.927646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.462 qpair failed and we were unable to recover it. 00:28:50.462 [2024-05-15 01:31:25.928129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.928630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.928670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.462 qpair failed and we were unable to recover it. 00:28:50.462 [2024-05-15 01:31:25.929137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.929643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.929681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.462 qpair failed and we were unable to recover it. 00:28:50.462 [2024-05-15 01:31:25.930076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.930550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.930590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.462 qpair failed and we were unable to recover it. 00:28:50.462 [2024-05-15 01:31:25.931017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.931446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.931463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.462 qpair failed and we were unable to recover it. 00:28:50.462 [2024-05-15 01:31:25.931828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.932224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.932263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.462 qpair failed and we were unable to recover it. 
00:28:50.462 [2024-05-15 01:31:25.932680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.933154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.933205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.462 qpair failed and we were unable to recover it. 00:28:50.462 [2024-05-15 01:31:25.933687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.934166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.934227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.462 qpair failed and we were unable to recover it. 00:28:50.462 [2024-05-15 01:31:25.934583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.935054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.935093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.462 qpair failed and we were unable to recover it. 00:28:50.462 [2024-05-15 01:31:25.935509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.935915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.935954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.462 qpair failed and we were unable to recover it. 00:28:50.462 [2024-05-15 01:31:25.936436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.936836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.936874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.462 qpair failed and we were unable to recover it. 00:28:50.462 [2024-05-15 01:31:25.937290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.937761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.937800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.462 qpair failed and we were unable to recover it. 00:28:50.462 [2024-05-15 01:31:25.938286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.938690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.938729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.462 qpair failed and we were unable to recover it. 
00:28:50.462 [2024-05-15 01:31:25.939214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.939689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.939727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.462 qpair failed and we were unable to recover it. 00:28:50.462 [2024-05-15 01:31:25.940221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.940633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.940671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.462 qpair failed and we were unable to recover it. 00:28:50.462 [2024-05-15 01:31:25.941081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.941488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.941527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.462 qpair failed and we were unable to recover it. 00:28:50.462 [2024-05-15 01:31:25.941928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.942339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.462 [2024-05-15 01:31:25.942379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.462 qpair failed and we were unable to recover it. 00:28:50.462 [2024-05-15 01:31:25.942799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.943271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.943318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.943531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.943885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.943923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.944320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.944724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.944762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 
00:28:50.463 [2024-05-15 01:31:25.945202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.945624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.945663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.946130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.946516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.946555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.947032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.947389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.947406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.947787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.948212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.948252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.948646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.949096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.949134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.949551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.950028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.950065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.950548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.951048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.951087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 
00:28:50.463 [2024-05-15 01:31:25.951509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.951989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.952027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.952361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.952736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.952752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.953187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.953644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.953682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.954031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.954385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.954401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.954837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.955286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.955326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.955731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.956199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.956238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.956733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.957073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.957112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 
00:28:50.463 [2024-05-15 01:31:25.957528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.957975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.957991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.958405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.958788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.958827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.959076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.959561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.959601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.959875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.960345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.960384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.960709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.961168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.961215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.961686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.962039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.962078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.962559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.963015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.963031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 
00:28:50.463 [2024-05-15 01:31:25.963484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.963897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.963936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.964398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.964868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.964906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.965386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.965788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.965804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.966142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.966539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.966578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.967036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.967287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.967327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.967808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.968259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.968298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.968710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.969186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.969233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 
00:28:50.463 [2024-05-15 01:31:25.969718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.970216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.970255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.463 [2024-05-15 01:31:25.970742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.971225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.463 [2024-05-15 01:31:25.971265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.463 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.971725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.971965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.972004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.972408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.972864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.972903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.973113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.973601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.973640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.974124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.974511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.974550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.974987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.975420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.975459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 
00:28:50.464 [2024-05-15 01:31:25.975941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.976444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.976484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.976898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.977365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.977404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.977883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.978356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.978402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.978856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.979334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.979374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.979788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.980265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.980305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.980766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.981085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.981101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.981490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.981906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.981945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 
00:28:50.464 [2024-05-15 01:31:25.982383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.982792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.982830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.983310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.983763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.983802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.984284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.984703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.984741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.985205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.985630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.985669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.986070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.986538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.986577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.987080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.987531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.987576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.988036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.988454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.988494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 
00:28:50.464 [2024-05-15 01:31:25.988883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.989070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.989108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.989592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.990019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.990057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.990540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.990942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.990959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.991296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.991740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.991779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.992096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.992526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.992542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.992922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.993268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.993284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.993728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.994179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.994226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 
00:28:50.464 [2024-05-15 01:31:25.994743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.995215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.995255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.995719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.996115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.996154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.464 qpair failed and we were unable to recover it. 00:28:50.464 [2024-05-15 01:31:25.996654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.464 [2024-05-15 01:31:25.997061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:25.997100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:25.997531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:25.997992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:25.998031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:25.998466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:25.998857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:25.998873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:25.999300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:25.999712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:25.999750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.000207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.000595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.000633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 
00:28:50.465 [2024-05-15 01:31:26.001114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.001571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.001588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.001963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.002097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.002113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.002483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.002916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.002954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.003410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.003809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.003848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.004330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.004804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.004842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.005282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.005734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.005772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.006231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.006703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.006741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 
00:28:50.465 [2024-05-15 01:31:26.007204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.007679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.007717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.008208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.008548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.008586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.009051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.009430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.009470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.009861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.010208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.010247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.010708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.011107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.011145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.011570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.011979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.012017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.012437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.012904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.012943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 
00:28:50.465 [2024-05-15 01:31:26.013426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.013816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.013833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.014195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.014612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.014651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.015149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.015636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.015676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.016101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.016568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.016607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.017028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.017435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.017451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.017897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.018301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.018340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.018673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.019120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.019136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 
00:28:50.465 [2024-05-15 01:31:26.019489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.019934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.019972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.020433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.020829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.020846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.021210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.021676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.021714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.022175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.022370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.465 [2024-05-15 01:31:26.022409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.465 qpair failed and we were unable to recover it. 00:28:50.465 [2024-05-15 01:31:26.022798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.023288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.023329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.023815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.024312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.024351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.024737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.025119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.025157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 
00:28:50.466 [2024-05-15 01:31:26.025584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.026064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.026102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.026495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.026894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.026933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.027325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.027798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.027836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.028234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.028711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.028749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.029069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.029422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.029438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.029803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.030206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.030222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.030629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.030936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.030952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 
00:28:50.466 [2024-05-15 01:31:26.031327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.031822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.031879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.032307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.032668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.032684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.033142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.033597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.033636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.033980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.034358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.034398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.034741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.035150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.035189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.035680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.036110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.036149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.036590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.037064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.037103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 
00:28:50.466 [2024-05-15 01:31:26.037615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.037961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.037977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.038284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.038781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.038819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.039228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.039635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.039683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.040119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.040537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.040576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.040947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.041378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.041417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.041807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.042286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.042327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.042733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.043178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.043197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 
00:28:50.466 [2024-05-15 01:31:26.043608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.044037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.044053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.044460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.044880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.044919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.045382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.045835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.045873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.046262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.046741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.046779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.047169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.047459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.047498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.047885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.048226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.048265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.048746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.049147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.049186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 
00:28:50.466 [2024-05-15 01:31:26.049533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.050000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.050016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.050429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.050809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.050848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.051333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.051782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.051821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.466 [2024-05-15 01:31:26.052215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.052667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.466 [2024-05-15 01:31:26.052705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.466 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.053062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.053542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.053558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.053972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.054449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.054488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.054890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.055343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.055383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 
00:28:50.467 [2024-05-15 01:31:26.055771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.056210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.056250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.056733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.057213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.057252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.057664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.057996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.058012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.058450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.058699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.058737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.059148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.059555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.059595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.060053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.060524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.060563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.060994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.061447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.061487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 
00:28:50.467 [2024-05-15 01:31:26.062023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.062327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.062343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.062685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.063119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.063157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.063667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.064032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.064070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.064556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.064894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.064933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.065278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.065708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.065746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.066148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.066604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.066645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.066918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.067338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.067354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 
00:28:50.467 [2024-05-15 01:31:26.067707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.068109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.068125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.068273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.068638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.068676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.069109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.069567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.069607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.070011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.070337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.070376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.070791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.071172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.071219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.071649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.072097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.072135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.072478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.072859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.072898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 
00:28:50.467 [2024-05-15 01:31:26.073318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.073728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.073767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.074143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.074552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.074591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.075065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.075490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.075536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.075965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.076370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.076386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.076762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.077173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.077219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.077578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.077979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.077994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.078364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.078740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.078778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 
00:28:50.467 [2024-05-15 01:31:26.079189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.079654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.079692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.080160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.080497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.080536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.467 qpair failed and we were unable to recover it. 00:28:50.467 [2024-05-15 01:31:26.080952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.467 [2024-05-15 01:31:26.081348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.081387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.081779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.082231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.082270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.082752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.083150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.083189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.083474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.083884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.083923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.084413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.084888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.084927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 
00:28:50.468 [2024-05-15 01:31:26.085428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.085845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.085884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.086068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.086552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.086591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.087051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.087400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.087440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.087900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.088272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.088289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.088636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.089042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.089080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.089565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.089994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.090033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.090517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.090975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.091014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 
00:28:50.468 [2024-05-15 01:31:26.091476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.091960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.091999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.092388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.092839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.092878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.093364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.093840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.093878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.094305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.094649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.094688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.095078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.095540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.095556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.095965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.096374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.096413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.096817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.097296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.097336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 
00:28:50.468 [2024-05-15 01:31:26.097750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.098221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.098260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.098746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.099141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.099179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.099604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.100026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.100064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.100521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.100998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.101037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.101517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.101895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.101934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.102413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.102820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.102836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.103278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.103675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.103713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 
00:28:50.468 [2024-05-15 01:31:26.104032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.104438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.104456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.468 qpair failed and we were unable to recover it. 00:28:50.468 [2024-05-15 01:31:26.104872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.468 [2024-05-15 01:31:26.105269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.105308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.105658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.106120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.106158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.106652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.107049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.107087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.107476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.107907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.107946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.108352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.108562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.108600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.109065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.109447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.109486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 
00:28:50.469 [2024-05-15 01:31:26.109871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.110341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.110358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.110729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.111111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.111127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.111552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.111953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.111991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.112471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.112939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.112977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.113384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.113804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.113842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.114272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.114721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.114759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.115242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.115744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.115782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 
00:28:50.469 [2024-05-15 01:31:26.116239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.116589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.116605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.117038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.117423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.117462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.117937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.118387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.118426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.118610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.119088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.119127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.119544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.120015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.120059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.120542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.120730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.120768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.121248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.121582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.121620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 
00:28:50.469 [2024-05-15 01:31:26.122035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.122228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.122266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.122691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.123080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.123096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.123488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.123962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.124001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.124423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.124900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.124938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.125396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.125818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.125856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.126269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.126647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.126685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.127146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.127625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.127664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 
00:28:50.469 [2024-05-15 01:31:26.128093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.128552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.128590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.129057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.129467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.129506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.129863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.130288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.130327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.469 [2024-05-15 01:31:26.130780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.131250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.469 [2024-05-15 01:31:26.131290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.469 qpair failed and we were unable to recover it. 00:28:50.470 [2024-05-15 01:31:26.131700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.132097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.132136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.470 qpair failed and we were unable to recover it. 00:28:50.470 [2024-05-15 01:31:26.132551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.132949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.132987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.470 qpair failed and we were unable to recover it. 00:28:50.470 [2024-05-15 01:31:26.133372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.133846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.133884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.470 qpair failed and we were unable to recover it. 
00:28:50.470 [2024-05-15 01:31:26.134293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.134764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.134802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.470 qpair failed and we were unable to recover it. 00:28:50.470 [2024-05-15 01:31:26.135301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.135798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.135836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.470 qpair failed and we were unable to recover it. 00:28:50.470 [2024-05-15 01:31:26.136269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.136747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.136785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.470 qpair failed and we were unable to recover it. 00:28:50.470 [2024-05-15 01:31:26.137052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.137501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.137540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.470 qpair failed and we were unable to recover it. 00:28:50.470 [2024-05-15 01:31:26.137797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.138249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.138288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.470 qpair failed and we were unable to recover it. 00:28:50.470 [2024-05-15 01:31:26.138770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.139165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.139224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.470 qpair failed and we were unable to recover it. 00:28:50.470 [2024-05-15 01:31:26.139705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.140061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.140077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.470 qpair failed and we were unable to recover it. 
00:28:50.470 [2024-05-15 01:31:26.140499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.140971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.141009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.470 qpair failed and we were unable to recover it. 00:28:50.470 [2024-05-15 01:31:26.141464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.141922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.141961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.470 qpair failed and we were unable to recover it. 00:28:50.470 [2024-05-15 01:31:26.142341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.142721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.142760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.470 qpair failed and we were unable to recover it. 00:28:50.470 [2024-05-15 01:31:26.143104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.143582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.143621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.470 qpair failed and we were unable to recover it. 00:28:50.470 [2024-05-15 01:31:26.144098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.470 [2024-05-15 01:31:26.144517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.144534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.737 qpair failed and we were unable to recover it. 00:28:50.737 [2024-05-15 01:31:26.144941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.145363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.145380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.737 qpair failed and we were unable to recover it. 00:28:50.737 [2024-05-15 01:31:26.145718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.146071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.146087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.737 qpair failed and we were unable to recover it. 
00:28:50.737 [2024-05-15 01:31:26.146519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.146685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.146701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.737 qpair failed and we were unable to recover it. 00:28:50.737 [2024-05-15 01:31:26.147054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.147403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.147420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.737 qpair failed and we were unable to recover it. 00:28:50.737 [2024-05-15 01:31:26.147774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.148166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.148216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.737 qpair failed and we were unable to recover it. 00:28:50.737 [2024-05-15 01:31:26.148627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.149076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.149114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.737 qpair failed and we were unable to recover it. 00:28:50.737 [2024-05-15 01:31:26.149583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.149927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.149966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.737 qpair failed and we were unable to recover it. 00:28:50.737 [2024-05-15 01:31:26.150447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.150854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.150892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.737 qpair failed and we were unable to recover it. 00:28:50.737 [2024-05-15 01:31:26.151375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.151846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.151862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.737 qpair failed and we were unable to recover it. 
00:28:50.737 [2024-05-15 01:31:26.152236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.152634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.152672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.737 qpair failed and we were unable to recover it. 00:28:50.737 [2024-05-15 01:31:26.153015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.153439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.737 [2024-05-15 01:31:26.153455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.737 qpair failed and we were unable to recover it. 00:28:50.737 [2024-05-15 01:31:26.153902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.154351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.154390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.154915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.155342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.155382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.155850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.156300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.156339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.156752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.157085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.157101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.157538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.157964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.158007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 
00:28:50.738 [2024-05-15 01:31:26.158418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.158878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.158917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.159409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.159813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.159851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.160312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.160788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.160826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.161311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.161733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.161771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.162232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.162705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.162743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.163231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.163701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.163717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.164098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.164475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.164494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 
00:28:50.738 [2024-05-15 01:31:26.164929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.165311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.165327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.165766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.166116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.166154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.166585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.167035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.167074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.167564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.167964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.168002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.168486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.168904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.168942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.169349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.169757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.169796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.170279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.170479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.170518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 
00:28:50.738 [2024-05-15 01:31:26.170922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.171394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.171411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.171774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.172183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.172231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.172698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.173170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.173222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.173705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.173963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.174002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.174347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.174768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.174807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.175146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.175554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.175593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.175991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.176377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.176393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 
00:28:50.738 [2024-05-15 01:31:26.176808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.177207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.177245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.177636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.178131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.178169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.178654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.179049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.179087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.738 [2024-05-15 01:31:26.179572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.179928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.738 [2024-05-15 01:31:26.179944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.738 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.180359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.180756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.180795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.181210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.181688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.181727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.182218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.182711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.182750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 
00:28:50.739 [2024-05-15 01:31:26.183237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.183632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.183670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.184154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.184534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.184550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.184894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.185309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.185348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.185815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.186268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.186307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.186722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.187216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.187261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.187564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.187974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.188013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.188494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.188837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.188875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 
00:28:50.739 [2024-05-15 01:31:26.189355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.189823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.189862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.190314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.190535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.190574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.190974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.191438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.191478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.191963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.192208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.192224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.192635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.193029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.193068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.193461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.193961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.193999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.194503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.194804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.194842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 
00:28:50.739 [2024-05-15 01:31:26.195303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.195703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.195741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.196174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.196640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.196679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.197140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.197560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.197599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.198085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.198585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.198624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.199111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.199582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.199622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.200101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.200560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.200577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.201030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.201208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.201247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 
00:28:50.739 [2024-05-15 01:31:26.201730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.202220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.202259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.202720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.203211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.203251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.739 qpair failed and we were unable to recover it. 00:28:50.739 [2024-05-15 01:31:26.203758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.739 [2024-05-15 01:31:26.204225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.204264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.204672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.205079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.205117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.205536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.206011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.206049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.206403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.206789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.206828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.207231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.207611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.207649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 
00:28:50.740 [2024-05-15 01:31:26.208042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.208461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.208500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.208982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.209383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.209422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.209912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.210322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.210361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.210725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.211159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.211216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.211658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.212105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.212143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.212635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.213036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.213074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.213503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.213980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.214023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 
00:28:50.740 [2024-05-15 01:31:26.214474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.214950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.214988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.215492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.215972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.216010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.216436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.216863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.216900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.217380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.217725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.217741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.218156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.218617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.218662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.219122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.219380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.219419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.219825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.220272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.220288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 
00:28:50.740 [2024-05-15 01:31:26.220599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.221007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.221046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.221527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.221870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.221908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.222368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.222747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.222785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.223170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.223676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.223715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.223966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.224460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.224500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.224936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.225164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.225180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.225609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.226005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.226043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 
00:28:50.740 [2024-05-15 01:31:26.226444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.226894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.226932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.227329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.227727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.227766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.228109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.228459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.228499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.229001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.229200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.740 [2024-05-15 01:31:26.229239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.740 qpair failed and we were unable to recover it. 00:28:50.740 [2024-05-15 01:31:26.229650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.741 [2024-05-15 01:31:26.229995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.741 [2024-05-15 01:31:26.230034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.741 qpair failed and we were unable to recover it. 00:28:50.741 [2024-05-15 01:31:26.230508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.741 [2024-05-15 01:31:26.230963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.741 [2024-05-15 01:31:26.231001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.741 qpair failed and we were unable to recover it. 00:28:50.741 [2024-05-15 01:31:26.231465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.741 [2024-05-15 01:31:26.231844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.741 [2024-05-15 01:31:26.231883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.741 qpair failed and we were unable to recover it. 
00:28:50.741 [2024-05-15 01:31:26.232297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.741 [2024-05-15 01:31:26.232795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.741 [2024-05-15 01:31:26.232834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420
00:28:50.741 qpair failed and we were unable to recover it.
00:28:50.741-00:28:50.746 (the four-line error pattern above repeats back-to-back for every reconnect attempt from 01:31:26.232297 through 01:31:26.361609, always with errno = 111 and always for tqpair=0x21f8560 at addr=10.0.0.2, port=4420; only the timestamps differ)
00:28:50.746 [2024-05-15 01:31:26.361989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.362321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.362360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.746 qpair failed and we were unable to recover it. 00:28:50.746 [2024-05-15 01:31:26.362767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.363241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.363280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.746 qpair failed and we were unable to recover it. 00:28:50.746 [2024-05-15 01:31:26.363747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.364144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.364160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.746 qpair failed and we were unable to recover it. 00:28:50.746 [2024-05-15 01:31:26.364520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.364999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.365038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.746 qpair failed and we were unable to recover it. 00:28:50.746 [2024-05-15 01:31:26.365437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.365908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.365947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.746 qpair failed and we were unable to recover it. 00:28:50.746 [2024-05-15 01:31:26.366276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.366772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.366810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.746 qpair failed and we were unable to recover it. 00:28:50.746 [2024-05-15 01:31:26.367189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.367613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.367652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.746 qpair failed and we were unable to recover it. 
00:28:50.746 [2024-05-15 01:31:26.368137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.368571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.368610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.746 qpair failed and we were unable to recover it. 00:28:50.746 [2024-05-15 01:31:26.369011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.369370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.369386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.746 qpair failed and we were unable to recover it. 00:28:50.746 [2024-05-15 01:31:26.369830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.370282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.370321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.746 qpair failed and we were unable to recover it. 00:28:50.746 [2024-05-15 01:31:26.370716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.371173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.746 [2024-05-15 01:31:26.371232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.746 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.371703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.372023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.372062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.372530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.373036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.373074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.373543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.373948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.373987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 
00:28:50.747 [2024-05-15 01:31:26.374478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.374874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.374913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.375253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.375636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.375676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.376159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.376616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.376657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.377116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.377582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.377621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.378005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.378479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.378518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.378917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.379390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.379429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.379922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.380373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.380412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 
00:28:50.747 [2024-05-15 01:31:26.380818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.381268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.381308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.381711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.382162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.382207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.382700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.383077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.383116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.383559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.384083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.384104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.384541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.384975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.384992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.385399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.385751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.385767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.386201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.386576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.386592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 
00:28:50.747 [2024-05-15 01:31:26.387035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.387365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.387405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.387784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.388128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.388167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.388663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.389059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.389098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.389488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.389827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.389865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.390328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.390716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.390732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.391088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.391531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.391572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.392053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.392400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.392439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 
00:28:50.747 [2024-05-15 01:31:26.392826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.393273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.393314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.393776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.394249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.394289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.394768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.395240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.395280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.395788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.396186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.396235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.396573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.396903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.396941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.747 qpair failed and we were unable to recover it. 00:28:50.747 [2024-05-15 01:31:26.397425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.397909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.747 [2024-05-15 01:31:26.397948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.398409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.398881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.398898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 
00:28:50.748 [2024-05-15 01:31:26.399348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.399753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.399788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.400251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.400638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.400682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.401149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.401368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.401408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.401795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.402131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.402147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.402525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.402939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.402978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.403451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.403790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.403828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.404185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.404543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.404592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 
00:28:50.748 [2024-05-15 01:31:26.405056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.405459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.405498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.405953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.406205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.406222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.406463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.406722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.406760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.407168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.407635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.407674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.408059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.408538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.408577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.409035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.409398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.409437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.409877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.410353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.410392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 
00:28:50.748 [2024-05-15 01:31:26.410796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.411186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.411246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.411726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.412196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.412213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.412366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.412671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.412710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.413125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.413536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.413575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.413979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.414428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.414468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.414885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.415360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.415398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.415852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.416284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.416300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 
00:28:50.748 [2024-05-15 01:31:26.416655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.417007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.417024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.417424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.417811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.417850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.418272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.418677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.418715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.418908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.419345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.419362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.419801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.420253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.420293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.420701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.421157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.748 [2024-05-15 01:31:26.421173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:50.748 qpair failed and we were unable to recover it. 00:28:50.748 [2024-05-15 01:31:26.421462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.015 [2024-05-15 01:31:26.421898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.015 [2024-05-15 01:31:26.421915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.015 qpair failed and we were unable to recover it. 
00:28:51.015 [2024-05-15 01:31:26.422325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.015 [2024-05-15 01:31:26.422691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.015 [2024-05-15 01:31:26.422707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.015 qpair failed and we were unable to recover it. 00:28:51.015 [2024-05-15 01:31:26.423117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.015 [2024-05-15 01:31:26.423468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.015 [2024-05-15 01:31:26.423484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.015 qpair failed and we were unable to recover it. 00:28:51.015 [2024-05-15 01:31:26.423853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.015 [2024-05-15 01:31:26.424237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.015 [2024-05-15 01:31:26.424277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.015 qpair failed and we were unable to recover it. 00:28:51.015 [2024-05-15 01:31:26.424738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.015 [2024-05-15 01:31:26.425211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.015 [2024-05-15 01:31:26.425251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.015 qpair failed and we were unable to recover it. 00:28:51.015 [2024-05-15 01:31:26.425654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.015 [2024-05-15 01:31:26.426077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.015 [2024-05-15 01:31:26.426115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.015 qpair failed and we were unable to recover it. 00:28:51.015 [2024-05-15 01:31:26.426465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.015 [2024-05-15 01:31:26.426706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.015 [2024-05-15 01:31:26.426744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.015 qpair failed and we were unable to recover it. 00:28:51.015 [2024-05-15 01:31:26.427150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.015 [2024-05-15 01:31:26.427569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.015 [2024-05-15 01:31:26.427609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 
00:28:51.016 [2024-05-15 01:31:26.427929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.428361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.428378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.428746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.429180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.429227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.429640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.430045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.430083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.430522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.430916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.430932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.431295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.431707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.431745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.432163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.432354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.432393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.432880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.433201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.433218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 
00:28:51.016 [2024-05-15 01:31:26.433517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.433922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.433961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.434375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.434854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.434893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.435356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.435734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.435782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.436052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.436436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.436475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.436879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.437246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.437263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.437671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.437969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.437986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.438425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.438830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.438869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 
00:28:51.016 [2024-05-15 01:31:26.439236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.439567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.439607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.440013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.440404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.440444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.440930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.441355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.441394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.441811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.442159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.442178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.442595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.443001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.443017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.443391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.443810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.443849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.444263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.444717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.444756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 
00:28:51.016 [2024-05-15 01:31:26.445252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.445756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.445794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.446128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.446582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.446598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.446973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.447365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.447404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.447813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.448221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.448261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.448660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.449061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.449099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.449587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.449924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.449963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 00:28:51.016 [2024-05-15 01:31:26.450312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.450690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.016 [2024-05-15 01:31:26.450728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.016 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from [2024-05-15 01:31:26.451102] through [2024-05-15 01:31:26.568491], Jenkins timestamps 00:28:51.016 through 00:28:51.022 ...]
00:28:51.022 [2024-05-15 01:31:26.568901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.569256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.569272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.569708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.570122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.570138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.570566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.570845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.570861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.571271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.571677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.571694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.571978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.572343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.572359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.572789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.573221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.573240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.573666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.574020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.574036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 
00:28:51.022 [2024-05-15 01:31:26.574332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.574779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.574795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.575162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.575520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.575538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.575879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.576237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.576253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.576543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.576968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.576984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.577412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.577817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.577834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.578123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.578526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.578542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.578895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.579166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.579182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 
00:28:51.022 [2024-05-15 01:31:26.579533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.579959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.579976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.580381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.580722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.580738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.581157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.581521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.581537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.581947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.582350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.582367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.582775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.583138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.583154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.583603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.583964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.583980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.584392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.584821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.584838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 
00:28:51.022 [2024-05-15 01:31:26.585131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.585548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.585565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.585863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.586289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.586306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.586741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.587206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.587222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.587674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.588050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.588067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.588422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.588825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.588841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.589129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.589556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.589573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.022 qpair failed and we were unable to recover it. 00:28:51.022 [2024-05-15 01:31:26.590005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.022 [2024-05-15 01:31:26.590450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.590466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 
00:28:51.023 [2024-05-15 01:31:26.590793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.591139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.591156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.591585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.591956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.591972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.592355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.592778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.592795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.593229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.593591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.593628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.594107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.594569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.594610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.595079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.595477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.595517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.595942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.596395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.596412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 
00:28:51.023 [2024-05-15 01:31:26.596830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.597304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.597343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.597818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.598318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.598357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.598851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.599318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.599334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.599781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.600214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.600254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.600738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.601233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.601273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.601691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.602173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.602222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.602730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.603227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.603266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 
00:28:51.023 [2024-05-15 01:31:26.603772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.604176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.604226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.604639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.605046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.605085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.605595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.606080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.606118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.606540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.606939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.606979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.607444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.607898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.607914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.608282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.608644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.608660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.609091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.609586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.609627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 
00:28:51.023 [2024-05-15 01:31:26.610058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.610544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.610584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.610996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.611481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.611521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.612016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.612510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.612549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.613038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.613519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.613560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.613998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.614471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.614510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.614919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.615385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.615424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.615911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.616385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.616425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 
00:28:51.023 [2024-05-15 01:31:26.616796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.617270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.617315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.023 qpair failed and we were unable to recover it. 00:28:51.023 [2024-05-15 01:31:26.617703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.023 [2024-05-15 01:31:26.618213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.618255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.618743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.619091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.619130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.619555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.619980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.620019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.620408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.620886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.620925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.621340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.621800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.621839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.622305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.622733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.622772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 
00:28:51.024 [2024-05-15 01:31:26.623189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.623618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.623658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.624148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.624556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.624595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.625011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.625487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.625527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.625964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.626346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.626391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.626896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.627348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.627388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.627808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.628285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.628324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.628751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.629214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.629254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 
00:28:51.024 [2024-05-15 01:31:26.629667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.630160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.630218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.630662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.631101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.631140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.631637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.632069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.632108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.632536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.633042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.633081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.633504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.633936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.633976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.634385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.634840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.634879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.635370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.635790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.635806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 
00:28:51.024 [2024-05-15 01:31:26.636239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.636697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.636714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.637120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.637618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.637658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.638186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.638660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.638677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.638984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.639437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.639455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.639889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.640314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.640331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.024 [2024-05-15 01:31:26.640748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.641152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.024 [2024-05-15 01:31:26.641202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.024 qpair failed and we were unable to recover it. 00:28:51.025 [2024-05-15 01:31:26.641617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.642082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.642121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.025 qpair failed and we were unable to recover it. 
00:28:51.025 [2024-05-15 01:31:26.642568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.642991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.643030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.025 qpair failed and we were unable to recover it. 00:28:51.025 [2024-05-15 01:31:26.643463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.643920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.643959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.025 qpair failed and we were unable to recover it. 00:28:51.025 [2024-05-15 01:31:26.644451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.644853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.644893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.025 qpair failed and we were unable to recover it. 00:28:51.025 [2024-05-15 01:31:26.645377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.645827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.645866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.025 qpair failed and we were unable to recover it. 00:28:51.025 [2024-05-15 01:31:26.646340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.646723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.646763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.025 qpair failed and we were unable to recover it. 00:28:51.025 [2024-05-15 01:31:26.647225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.647708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.647747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.025 qpair failed and we were unable to recover it. 00:28:51.025 [2024-05-15 01:31:26.648240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.648724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.648764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.025 qpair failed and we were unable to recover it. 
00:28:51.025 [2024-05-15 01:31:26.649211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.649619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.649658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.025 qpair failed and we were unable to recover it. 00:28:51.025 [2024-05-15 01:31:26.650143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.650555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.650595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.025 qpair failed and we were unable to recover it. 00:28:51.025 [2024-05-15 01:31:26.651031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.651437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.651477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.025 qpair failed and we were unable to recover it. 00:28:51.025 [2024-05-15 01:31:26.651849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.652316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.652357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.025 qpair failed and we were unable to recover it. 00:28:51.025 [2024-05-15 01:31:26.652846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.653348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.653387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.025 qpair failed and we were unable to recover it. 00:28:51.025 [2024-05-15 01:31:26.653900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.654261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.654301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.025 qpair failed and we were unable to recover it. 00:28:51.025 [2024-05-15 01:31:26.654670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.655091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.025 [2024-05-15 01:31:26.655130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.025 qpair failed and we were unable to recover it. 
00:28:51.025 [2024-05-15 01:31:26.655622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.025 [2024-05-15 01:31:26.655980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.025 [2024-05-15 01:31:26.656020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420
00:28:51.025 qpair failed and we were unable to recover it.
[... the same three-line failure sequence — connect() failed with errno = 111 from posix_sock_create, followed by nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x21f8560 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." — repeats continuously from 01:31:26.656 through 01:31:26.800 ...]
00:28:51.297 [2024-05-15 01:31:26.800034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.297 [2024-05-15 01:31:26.800519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.297 [2024-05-15 01:31:26.800558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420
00:28:51.297 qpair failed and we were unable to recover it.
00:28:51.297 [2024-05-15 01:31:26.801061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.297 [2024-05-15 01:31:26.801483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.297 [2024-05-15 01:31:26.801501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.297 qpair failed and we were unable to recover it. 00:28:51.297 [2024-05-15 01:31:26.801869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.297 [2024-05-15 01:31:26.802237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.297 [2024-05-15 01:31:26.802255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.297 qpair failed and we were unable to recover it. 00:28:51.297 [2024-05-15 01:31:26.802676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.297 [2024-05-15 01:31:26.803139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.803178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.803655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.804157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.804208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.804730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.805093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.805134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.805628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.805998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.806037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.806464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.806825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.806865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 
00:28:51.298 [2024-05-15 01:31:26.807299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.807666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.807705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.808214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.808639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.808679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.809056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.810498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.810536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.811005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.811401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.811419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.811742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.812224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.812264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.812700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.813134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.813152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.813489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.813872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.813911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 
00:28:51.298 [2024-05-15 01:31:26.814281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.814725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.814764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.815293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.815668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.815708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.816221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.816547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.816565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.817040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.817458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.817476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.817793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.818256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.818305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.818767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.819217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.819257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.819636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.820077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.820116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 
00:28:51.298 [2024-05-15 01:31:26.820537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.820961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.821001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.821547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.821984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.822001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.822425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.822846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.822863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.823312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.823639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.823685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.824119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.824588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.824628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.298 qpair failed and we were unable to recover it. 00:28:51.298 [2024-05-15 01:31:26.824998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.825489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.298 [2024-05-15 01:31:26.825531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.825984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.826454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.826494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 
00:28:51.299 [2024-05-15 01:31:26.826933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.827424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.827470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.827998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.828517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.828557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.828915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.829405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.829423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.829737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.830108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.830147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.830661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.831127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.831165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.831711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.832131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.832170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.832611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.833102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.833143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 
00:28:51.299 [2024-05-15 01:31:26.833622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.834043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.834082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.834515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.835032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.835072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.835589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.836052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.836093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.836492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.836860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.836899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.837336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.837786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.837825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.838324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.838735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.838774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.839254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.839744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.839783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 
00:28:51.299 [2024-05-15 01:31:26.840312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.840756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.840795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.841279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.841695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.841743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.842185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.842572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.842613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.843117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.843546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.843587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.844030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.844406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.844424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.844813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.845248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.845289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.845783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.846204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.846243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 
00:28:51.299 [2024-05-15 01:31:26.846670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.847119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.847158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.847671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.848090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.848108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.848518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.848941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.848980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.849425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.849848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.849887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.850391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.850839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.850879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.851342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.851835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.851874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 00:28:51.299 [2024-05-15 01:31:26.852313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.852751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.852791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.299 qpair failed and we were unable to recover it. 
00:28:51.299 [2024-05-15 01:31:26.853310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.299 [2024-05-15 01:31:26.853800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.853840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.854338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.854776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.854815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.855252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.855719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.855757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.856199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.856601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.856647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.857100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.857600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.857642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.858009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.858388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.858428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.858866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.859375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.859414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 
00:28:51.300 [2024-05-15 01:31:26.859915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.860360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.860400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.860923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.861350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.861390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.861883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.862336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.862377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.862806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.863216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.863234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.863623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.864148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.864186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.864698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.865125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.865164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.865665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.866159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.866211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 
00:28:51.300 [2024-05-15 01:31:26.866701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.867112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.867150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.867613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.868042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.868082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.868519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.868934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.868951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.869316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.869678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.869696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.870098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.870517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.870558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.871059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.871499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.871539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.872060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.872503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.872544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 
00:28:51.300 [2024-05-15 01:31:26.872971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.873442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.873482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.873861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.874293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.874334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.874780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.875266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.875286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.875682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.876175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.876227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.876721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.877151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.877204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.877649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.878124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.878163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.878664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.879176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.879230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 
00:28:51.300 [2024-05-15 01:31:26.879639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.880109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.880148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.880672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.881096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.881136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.300 qpair failed and we were unable to recover it. 00:28:51.300 [2024-05-15 01:31:26.881639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.300 [2024-05-15 01:31:26.882117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.882157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.882680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.883112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.883151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.883608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.884100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.884140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.884634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.885020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.885037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.885412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.885842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.885881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 
00:28:51.301 [2024-05-15 01:31:26.886387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.886795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.886834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.887265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.887685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.887725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.888136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.888620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.888660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.889125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.889567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.889609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.890121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.890641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.890682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.891233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.891649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.891691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.892210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.892728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.892777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 
00:28:51.301 [2024-05-15 01:31:26.893169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.893634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.893674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.894111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.894545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.894585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.895022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.895490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.895530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.896060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.896556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.896597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.897055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.897506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.897547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.897992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.898483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.898523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.899050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.899502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.899542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 
00:28:51.301 [2024-05-15 01:31:26.900020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.900537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.900576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.901119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.901565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.901604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.902084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.902566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.902584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.902925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.903269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.903286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.903656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.904096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.904135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.904651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.905098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.905138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.905615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.905983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.906022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 
00:28:51.301 [2024-05-15 01:31:26.906516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.906936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.906976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.907473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.907883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.907923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.908402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.908858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.908897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.909348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.909815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.909855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.301 qpair failed and we were unable to recover it. 00:28:51.301 [2024-05-15 01:31:26.910279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.301 [2024-05-15 01:31:26.910767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.910806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.911262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.911732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.911771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.912223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.912642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.912681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 
00:28:51.302 [2024-05-15 01:31:26.913215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.913640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.913680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.914176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.914612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.914652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.915093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.915524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.915541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.915932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.916398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.916438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.916938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.917348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.917407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.917829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.918307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.918347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.918905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.919310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.919350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 
00:28:51.302 [2024-05-15 01:31:26.919776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.920268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.920286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.920713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.921236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.921277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.921706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.922159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.922209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.922688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.923137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.923176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.923640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.924146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.924205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.924727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.925159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.925217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.925648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.926099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.926146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 
00:28:51.302 [2024-05-15 01:31:26.926532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.926933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.926973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.927501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.927999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.928038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.928460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.928830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.928869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.929294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.929783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.929822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.930352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.930823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.930862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.931339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.931759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.931799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.932218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.932632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.932671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 
00:28:51.302 [2024-05-15 01:31:26.933130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.933585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.933626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.934106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.934531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.934571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.935080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.935506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.935545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.936069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.936521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.936561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.937073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.937526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.937566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.937992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.938400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.938440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.302 qpair failed and we were unable to recover it. 00:28:51.302 [2024-05-15 01:31:26.938940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.939409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.302 [2024-05-15 01:31:26.939448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 
00:28:51.303 [2024-05-15 01:31:26.939876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.940280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.940297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.940705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.941232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.941272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.941701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.942180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.942246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.942610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.943023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.943063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.943545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.944061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.944100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.944602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.945120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.945158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.945661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.946111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.946151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 
00:28:51.303 [2024-05-15 01:31:26.946655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.947076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.947116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.947606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.948094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.948133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.948569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.949046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.949085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.949504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.949943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.949982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.950394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.950875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.950915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.951359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.951847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.951886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.952400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.952816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.952856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 
00:28:51.303 [2024-05-15 01:31:26.953367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.953789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.953829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.954319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.954693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.954732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.955215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.955668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.955707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.956236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.956687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.956726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.957212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.957717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.957756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.958211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.958566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.958605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.959100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.959564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.959604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 
00:28:51.303 [2024-05-15 01:31:26.960060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.960515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.960534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.960929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.961392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.961432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.961935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.962446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.303 [2024-05-15 01:31:26.962486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.303 qpair failed and we were unable to recover it. 00:28:51.303 [2024-05-15 01:31:26.962996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.963522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.963563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.304 qpair failed and we were unable to recover it. 00:28:51.304 [2024-05-15 01:31:26.964028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.964499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.964539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.304 qpair failed and we were unable to recover it. 00:28:51.304 [2024-05-15 01:31:26.964914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.965379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.965419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.304 qpair failed and we were unable to recover it. 00:28:51.304 [2024-05-15 01:31:26.965900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.966364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.966382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.304 qpair failed and we were unable to recover it. 
00:28:51.304 [2024-05-15 01:31:26.966820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.967223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.967263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.304 qpair failed and we were unable to recover it. 00:28:51.304 [2024-05-15 01:31:26.967776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.968311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.968352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.304 qpair failed and we were unable to recover it. 00:28:51.304 [2024-05-15 01:31:26.968835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.969325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.969365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.304 qpair failed and we were unable to recover it. 00:28:51.304 [2024-05-15 01:31:26.969745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.970146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.970185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.304 qpair failed and we were unable to recover it. 00:28:51.304 [2024-05-15 01:31:26.970689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.971162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.971212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.304 qpair failed and we were unable to recover it. 00:28:51.304 [2024-05-15 01:31:26.971661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.972149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.972188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.304 qpair failed and we were unable to recover it. 00:28:51.304 [2024-05-15 01:31:26.972681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.973096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.973143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.304 qpair failed and we were unable to recover it. 
00:28:51.304 [2024-05-15 01:31:26.973595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.974006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.974044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.304 qpair failed and we were unable to recover it. 00:28:51.304 [2024-05-15 01:31:26.974493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.974960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.974995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.304 qpair failed and we were unable to recover it. 00:28:51.304 [2024-05-15 01:31:26.975343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.975764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.975782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.304 qpair failed and we were unable to recover it. 00:28:51.304 [2024-05-15 01:31:26.976141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.976569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.976609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.304 qpair failed and we were unable to recover it. 00:28:51.304 [2024-05-15 01:31:26.976992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.977393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.304 [2024-05-15 01:31:26.977409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.304 qpair failed and we were unable to recover it. 00:28:51.304 [2024-05-15 01:31:26.977786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-05-15 01:31:26.978230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-05-15 01:31:26.978248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-05-15 01:31:26.978667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-05-15 01:31:26.979101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-05-15 01:31:26.979118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 
00:28:51.570 [2024-05-15 01:31:26.979546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-05-15 01:31:26.979910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-05-15 01:31:26.979926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-05-15 01:31:26.980382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-05-15 01:31:26.980847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-05-15 01:31:26.980887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-05-15 01:31:26.981369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-05-15 01:31:26.981693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-05-15 01:31:26.981738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-05-15 01:31:26.982165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-05-15 01:31:26.982589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-05-15 01:31:26.982630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-05-15 01:31:26.983046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-05-15 01:31:26.983480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-05-15 01:31:26.983521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-05-15 01:31:26.983964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-05-15 01:31:26.984458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-05-15 01:31:26.984498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 00:28:51.570 [2024-05-15 01:31:26.984945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-05-15 01:31:26.985343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-05-15 01:31:26.985362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.570 qpair failed and we were unable to recover it. 
00:28:51.570 [2024-05-15 01:31:26.985725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.570 [2024-05-15 01:31:26.986146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.986163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:26.986626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.987091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.987130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:26.987548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.987929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.987969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:26.988471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.988843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.988882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:26.989259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.989655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.989694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:26.990230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.990573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.990624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:26.991148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.991700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.991742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 
00:28:51.571 [2024-05-15 01:31:26.992275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.992786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.992825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:26.993281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.993647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.993686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:26.994236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.994678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.994716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:26.995159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.995549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.995589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:26.996019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.996548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.996588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:26.997037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.997503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.997544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:26.998018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.998458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.998476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 
00:28:51.571 [2024-05-15 01:31:26.998853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.999226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:26.999266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:26.999719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.000155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.000205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:27.000688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.001179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.001224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:27.001648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.002164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.002182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:27.002589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.003004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.003043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:27.003473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.003897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.003936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:27.004440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.004786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.004825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 
00:28:51.571 [2024-05-15 01:31:27.005300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.005677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.005695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:27.006156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.006541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.006581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:27.007092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.007602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.007643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:27.008025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.008503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.008543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:27.008976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.009460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.009501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:27.010039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.010542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.010584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:27.011136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.011614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.011654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 
00:28:51.571 [2024-05-15 01:31:27.012156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.012637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.012677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.571 [2024-05-15 01:31:27.013145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.013623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.571 [2024-05-15 01:31:27.013663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.571 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.014116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.014550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.014591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.014962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.015370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.015411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.015838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.016228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.016268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.016692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.017179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.017240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.017601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.017985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.018024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 
00:28:51.572 [2024-05-15 01:31:27.018538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.019008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.019048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.019546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.019975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.020014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.020432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.020851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.020890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.021390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.021714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.021732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.022156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.022653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.022693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.023119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.023592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.023632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.024070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.024471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.024490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 
00:28:51.572 [2024-05-15 01:31:27.024858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.025162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.025225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.025656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.026109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.026147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.026604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.027056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.027099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.027554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.027971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.027988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.028470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.028894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.028939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.029340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.029673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.029714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.030177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.030611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.030650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 
00:28:51.572 [2024-05-15 01:31:27.031100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.031565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.031584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.031991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.032415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.032456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.032814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.033321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.033362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.033785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.034274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.034315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.035208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.035669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.035687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.036098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.036474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.036491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.036822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.037134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.037151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 
00:28:51.572 [2024-05-15 01:31:27.037583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.037955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.037972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.038337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.038634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.038651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.039076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.039422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.039473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.572 [2024-05-15 01:31:27.039923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.040331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.572 [2024-05-15 01:31:27.040349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.572 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.040780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.041311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.041352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.041845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.042286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.042303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.042750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.043158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.043206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 
00:28:51.573 [2024-05-15 01:31:27.043695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.044123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.044162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.044606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.044998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.045037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.045520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.045939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.045979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.046478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.046928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.046979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.047413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.047905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.047944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.048366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.048785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.048823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.049235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.049625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.049665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 
00:28:51.573 [2024-05-15 01:31:27.050089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.050576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.050616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.051042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.051529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.051569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.052008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.052480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.052519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.052948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.053354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.053371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.053724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.054167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.054183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.054568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.054984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.055001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.055447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.055869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.055909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 
00:28:51.573 [2024-05-15 01:31:27.056337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.056811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.056828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.057294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.057744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.057782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.058286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.058775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.058814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.059298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.059689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.059729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.060172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.060683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.060722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.061252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.061743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.061782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.062236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.062703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.062742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 
00:28:51.573 [2024-05-15 01:31:27.063243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.063650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.063689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.064137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.064579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.064620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.065102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.065508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.065549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.066037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.066514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.066555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.066973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.067424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.067464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.573 qpair failed and we were unable to recover it. 00:28:51.573 [2024-05-15 01:31:27.067958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.068380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.573 [2024-05-15 01:31:27.068420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.068771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.069144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.069161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 
00:28:51.574 [2024-05-15 01:31:27.069604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.070036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.070054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.070518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.070994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.071033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.071556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.071992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.072009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.072401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.072894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.072933] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.073436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.073918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.073935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.074384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.074825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.074863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.075403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.075743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.075787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 
00:28:51.574 [2024-05-15 01:31:27.076267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.076755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.076795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.077297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.077718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.077736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.078137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.078649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.078689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.079143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.079633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.079651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.080109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.080532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.080581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.081033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.081408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.081448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.081875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.082364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.082404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 
00:28:51.574 [2024-05-15 01:31:27.082843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.083356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.083396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.083907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.084388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.084406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.084846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.085320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.085359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.085813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.086277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.086317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.086818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.087235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.087276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.087767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.088282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.088323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.088845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.089276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.089317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 
00:28:51.574 [2024-05-15 01:31:27.089809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.090224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.090264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.090760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.091274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.091314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.091826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.092337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.092378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.092831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.093317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.093357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.093745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.094206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.094245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.094770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.095212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.095252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 00:28:51.574 [2024-05-15 01:31:27.095707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.096222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.096262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.574 qpair failed and we were unable to recover it. 
00:28:51.574 [2024-05-15 01:31:27.096707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.097172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.574 [2024-05-15 01:31:27.097237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.097714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.098149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.098205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.098661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.099092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.099131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.099604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.100021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.100060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.100555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.101027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.101066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.101626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.102088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.102127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.102622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.103113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.103152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 
00:28:51.575 [2024-05-15 01:31:27.103671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.104080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.104119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.104623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.105133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.105175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.105743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.106161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.106210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.106709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.107214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.107255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.107750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.108250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.108267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.108700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.109176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.109226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.109649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.110140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.110178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 
00:28:51.575 [2024-05-15 01:31:27.110711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.111224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.111264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.111775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.112287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.112328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.112842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.113358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.113398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.113909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.114423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.114463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.114974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.115413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.115452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.115936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.116427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.116468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.117026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.117493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.117534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 
00:28:51.575 [2024-05-15 01:31:27.117979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.118364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.118404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.118885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.119352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.119392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.119917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.120334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.120381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.575 qpair failed and we were unable to recover it. 00:28:51.575 [2024-05-15 01:31:27.120755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.575 [2024-05-15 01:31:27.121231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.121272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.121820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.122288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.122328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.122829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.123338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.123378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.123862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.124300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.124317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 
00:28:51.576 [2024-05-15 01:31:27.124766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.125201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.125241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.125759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.126230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.126276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.126765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.127281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.127324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.127823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.128330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.128348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.128796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.129307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.129348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.129735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.130207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.130248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.130743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.131229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.131269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 
00:28:51.576 [2024-05-15 01:31:27.131719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.132215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.132256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.132771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.133282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.133322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.133830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.134282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.134322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.134758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.135165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.135213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.135709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.136123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.136162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.136689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.137180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.137241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.137755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.138265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.138306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 
00:28:51.576 [2024-05-15 01:31:27.138823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.139237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.139291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.139703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.140189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.140239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.140760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.141274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.141314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.141831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.142316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.142357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.142863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.143350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.143368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.143822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.144290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.144340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.144761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.145137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.145175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 
00:28:51.576 [2024-05-15 01:31:27.145711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.146214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.146255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.146700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.147110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.147150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.147657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.148130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.148170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.148691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.149154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.149203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.149717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.150168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.150221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.576 qpair failed and we were unable to recover it. 00:28:51.576 [2024-05-15 01:31:27.150662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.576 [2024-05-15 01:31:27.151173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.151225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.151732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.152243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.152284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 
00:28:51.577 [2024-05-15 01:31:27.152796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.153289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.153334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.153863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.154375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.154415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.154862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.155213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.155252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.155752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.156268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.156308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.156857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.157277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.157317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.157789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.158255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.158295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.158774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.159178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.159228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 
00:28:51.577 [2024-05-15 01:31:27.159728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.160239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.160279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.160719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.161212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.161252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.161694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.162106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.162145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.162655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.163169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.163217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.163663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.164076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.164114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.164654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.165144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.165183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.165722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.166138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.166176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 
00:28:51.577 [2024-05-15 01:31:27.166687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.167211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.167251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.167795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.168263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.168304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.168806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.169325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.169365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.169867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.170228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.170268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.170761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.171176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.171226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.171758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.172170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.172226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.172640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.173128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.173167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 
00:28:51.577 [2024-05-15 01:31:27.173702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.174214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.174255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.174769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.175238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.175278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.175775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.176211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.176251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.176755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.177166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.177233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.177755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.178155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.178206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.178690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.179153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.179170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 00:28:51.577 [2024-05-15 01:31:27.179606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.180033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.180073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.577 qpair failed and we were unable to recover it. 
00:28:51.577 [2024-05-15 01:31:27.180519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.577 [2024-05-15 01:31:27.180935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.180974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.181470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.181977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.182017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.182537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.183032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.183072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.183567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.184072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.184089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.184560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.185001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.185040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.185562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.186050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.186067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.186454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.186872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.186916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 
00:28:51.578 [2024-05-15 01:31:27.187411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.187819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.187858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.188363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.188805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.188844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.189332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.189786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.189824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.190326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.190837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.190876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.191392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.191906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.191947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.192491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.192906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.192945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.193450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.193915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.193954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 
00:28:51.578 [2024-05-15 01:31:27.194455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.194974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.195013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.195461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.195948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.195987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.196505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.196977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.197016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.197560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.198027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.198066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.198572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.199072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.199089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.199444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.199891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.199931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.200432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.200927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.200967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 
00:28:51.578 [2024-05-15 01:31:27.201495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.201960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.201977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.202419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.202916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.202955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.203436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.203908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.203947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.204447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.204854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.204893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.205395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.205902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.205919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.206390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.206769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.206786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.207233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.207605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.207622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 
00:28:51.578 [2024-05-15 01:31:27.207992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.208479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.208519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.209044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.209506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.209547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.578 qpair failed and we were unable to recover it. 00:28:51.578 [2024-05-15 01:31:27.209983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.578 [2024-05-15 01:31:27.210428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.210445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.210836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.211299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.211338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.211831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.212277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.212317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.212848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.213313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.213352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.213850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.214314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.214354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 
00:28:51.579 [2024-05-15 01:31:27.214827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.215319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.215359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.215895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.216309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.216349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.216825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.217347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.217387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.217914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.218327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.218367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.218815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.219328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.219367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.219881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.220396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.220436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.220947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.221417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.221457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 
00:28:51.579 [2024-05-15 01:31:27.221949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.222445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.222485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.222935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.223429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.223469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.223937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.224427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.224474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.224944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.225367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.225407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.225883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.226370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.226410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.226897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.227372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.227412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.227899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.228314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.228354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 
00:28:51.579 [2024-05-15 01:31:27.228852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.229372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.229428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.229967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.230478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.230519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.231037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.231548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.231588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.232031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.232519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.232559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.233003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.233491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.233532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.234041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.234527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.234567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 00:28:51.579 [2024-05-15 01:31:27.235092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.235580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.579 [2024-05-15 01:31:27.235619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.579 qpair failed and we were unable to recover it. 
00:28:51.580 [2024-05-15 01:31:27.236146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.236613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.236654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.580 qpair failed and we were unable to recover it. 00:28:51.580 [2024-05-15 01:31:27.237119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.237584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.237630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.580 qpair failed and we were unable to recover it. 00:28:51.580 [2024-05-15 01:31:27.238186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.238567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.238583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.580 qpair failed and we were unable to recover it. 00:28:51.580 [2024-05-15 01:31:27.239030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.239532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.239572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.580 qpair failed and we were unable to recover it. 00:28:51.580 [2024-05-15 01:31:27.240097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.240612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.240652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.580 qpair failed and we were unable to recover it. 00:28:51.580 [2024-05-15 01:31:27.241165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.241676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.241716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.580 qpair failed and we were unable to recover it. 00:28:51.580 [2024-05-15 01:31:27.242261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.242755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.242795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.580 qpair failed and we were unable to recover it. 
00:28:51.580 [2024-05-15 01:31:27.243323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.243830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.243869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.580 qpair failed and we were unable to recover it. 00:28:51.580 [2024-05-15 01:31:27.244388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.244887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.244925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.580 qpair failed and we were unable to recover it. 00:28:51.580 [2024-05-15 01:31:27.245443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.245867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.245906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.580 qpair failed and we were unable to recover it. 00:28:51.580 [2024-05-15 01:31:27.246408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.246918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.246956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.580 qpair failed and we were unable to recover it. 00:28:51.580 [2024-05-15 01:31:27.247477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.247976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.248015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.580 qpair failed and we were unable to recover it. 00:28:51.580 [2024-05-15 01:31:27.248541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.249010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.249049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.580 qpair failed and we were unable to recover it. 00:28:51.580 [2024-05-15 01:31:27.249548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.250051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.250068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.580 qpair failed and we were unable to recover it. 
00:28:51.580 [2024-05-15 01:31:27.250539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.250898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.250915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.580 qpair failed and we were unable to recover it. 00:28:51.580 [2024-05-15 01:31:27.251365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.251812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.251851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.580 qpair failed and we were unable to recover it. 00:28:51.580 [2024-05-15 01:31:27.252378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.252795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.252835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.580 qpair failed and we were unable to recover it. 00:28:51.580 [2024-05-15 01:31:27.253159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.253614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.253654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.580 qpair failed and we were unable to recover it. 00:28:51.580 [2024-05-15 01:31:27.254111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.254599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.580 [2024-05-15 01:31:27.254647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.580 qpair failed and we were unable to recover it. 00:28:51.580 [2024-05-15 01:31:27.255014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.846 [2024-05-15 01:31:27.255436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.846 [2024-05-15 01:31:27.255453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.846 qpair failed and we were unable to recover it. 00:28:51.846 [2024-05-15 01:31:27.255770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.846 [2024-05-15 01:31:27.256203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.846 [2024-05-15 01:31:27.256221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.846 qpair failed and we were unable to recover it. 
00:28:51.846 [2024-05-15 01:31:27.256643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.846 [2024-05-15 01:31:27.257045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.846 [2024-05-15 01:31:27.257084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.846 qpair failed and we were unable to recover it. 00:28:51.846 [2024-05-15 01:31:27.257577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.846 [2024-05-15 01:31:27.258065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.846 [2024-05-15 01:31:27.258104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.846 qpair failed and we were unable to recover it. 00:28:51.846 [2024-05-15 01:31:27.258618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.846 [2024-05-15 01:31:27.259099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.846 [2024-05-15 01:31:27.259116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.846 qpair failed and we were unable to recover it. 00:28:51.846 [2024-05-15 01:31:27.259574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.846 [2024-05-15 01:31:27.260084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.846 [2024-05-15 01:31:27.260124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.846 qpair failed and we were unable to recover it. 00:28:51.846 [2024-05-15 01:31:27.260641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.846 [2024-05-15 01:31:27.261128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.846 [2024-05-15 01:31:27.261166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.846 qpair failed and we were unable to recover it. 00:28:51.846 [2024-05-15 01:31:27.261700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.846 [2024-05-15 01:31:27.262164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.846 [2024-05-15 01:31:27.262213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.846 qpair failed and we were unable to recover it. 00:28:51.846 [2024-05-15 01:31:27.262706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.846 [2024-05-15 01:31:27.263220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.846 [2024-05-15 01:31:27.263261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.846 qpair failed and we were unable to recover it. 
[... the same retry pattern (two posix_sock_create connect() failures with errno = 111, an nvme_tcp_qpair_connect_sock error for tqpair=0x21f8560 at addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats continuously from 01:31:27.263785 through 01:31:27.398724 ...]
00:28:51.853 [2024-05-15 01:31:27.399227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.399714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.399753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.853 qpair failed and we were unable to recover it. 00:28:51.853 [2024-05-15 01:31:27.400237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.400627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.400666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.853 qpair failed and we were unable to recover it. 00:28:51.853 [2024-05-15 01:31:27.401150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.401625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.401643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.853 qpair failed and we were unable to recover it. 00:28:51.853 [2024-05-15 01:31:27.402091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.402520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.402560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.853 qpair failed and we were unable to recover it. 00:28:51.853 [2024-05-15 01:31:27.403033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.403446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.403487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.853 qpair failed and we were unable to recover it. 00:28:51.853 [2024-05-15 01:31:27.403938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.404423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.404464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.853 qpair failed and we were unable to recover it. 00:28:51.853 [2024-05-15 01:31:27.404888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.405377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.405418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.853 qpair failed and we were unable to recover it. 
00:28:51.853 [2024-05-15 01:31:27.405932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.406341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.406381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.853 qpair failed and we were unable to recover it. 00:28:51.853 [2024-05-15 01:31:27.406808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.407294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.407333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.853 qpair failed and we were unable to recover it. 00:28:51.853 [2024-05-15 01:31:27.407832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.408318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.408359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.853 qpair failed and we were unable to recover it. 00:28:51.853 [2024-05-15 01:31:27.408754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.409240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.409281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.853 qpair failed and we were unable to recover it. 00:28:51.853 [2024-05-15 01:31:27.409800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.410311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.410352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.853 qpair failed and we were unable to recover it. 00:28:51.853 [2024-05-15 01:31:27.410866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.411354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.411394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.853 qpair failed and we were unable to recover it. 00:28:51.853 [2024-05-15 01:31:27.411785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.412228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.412246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.853 qpair failed and we were unable to recover it. 
00:28:51.853 [2024-05-15 01:31:27.412628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.413117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.413156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.853 qpair failed and we were unable to recover it. 00:28:51.853 [2024-05-15 01:31:27.413694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.414134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.414172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.853 qpair failed and we were unable to recover it. 00:28:51.853 [2024-05-15 01:31:27.414662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.415147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.415186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.853 qpair failed and we were unable to recover it. 00:28:51.853 [2024-05-15 01:31:27.415715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.416233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.416274] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.853 qpair failed and we were unable to recover it. 00:28:51.853 [2024-05-15 01:31:27.416711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.417210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.853 [2024-05-15 01:31:27.417259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.853 qpair failed and we were unable to recover it. 00:28:51.853 [2024-05-15 01:31:27.417696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.418182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.418239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 00:28:51.854 [2024-05-15 01:31:27.418667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.419101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.419141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 
00:28:51.854 [2024-05-15 01:31:27.419577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.420046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.420085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 00:28:51.854 [2024-05-15 01:31:27.420586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.420901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.420939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 00:28:51.854 [2024-05-15 01:31:27.421422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.421796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.421814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 00:28:51.854 [2024-05-15 01:31:27.422266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.422734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.422773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 00:28:51.854 [2024-05-15 01:31:27.423248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.423714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.423753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 00:28:51.854 [2024-05-15 01:31:27.424227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.424694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.424733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 00:28:51.854 [2024-05-15 01:31:27.425282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.425698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.425737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 
00:28:51.854 [2024-05-15 01:31:27.426216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.426632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.426671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 00:28:51.854 [2024-05-15 01:31:27.427098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.427569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.427609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 00:28:51.854 [2024-05-15 01:31:27.428039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.428432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.428473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 00:28:51.854 [2024-05-15 01:31:27.428887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.429290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.429330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 00:28:51.854 [2024-05-15 01:31:27.429831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.430185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.430234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 00:28:51.854 [2024-05-15 01:31:27.430746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.431211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.431252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 00:28:51.854 [2024-05-15 01:31:27.431702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.432168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.432221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 
00:28:51.854 [2024-05-15 01:31:27.432723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.433109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.433148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 00:28:51.854 [2024-05-15 01:31:27.433624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.434070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.434109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 00:28:51.854 [2024-05-15 01:31:27.434523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.434997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.435036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 00:28:51.854 [2024-05-15 01:31:27.435460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.435928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.435967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 00:28:51.854 [2024-05-15 01:31:27.436463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.436984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.437023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 00:28:51.854 [2024-05-15 01:31:27.437496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.437979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.438018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 00:28:51.854 [2024-05-15 01:31:27.438484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.438896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.438934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 
00:28:51.854 [2024-05-15 01:31:27.439366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.439780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.854 [2024-05-15 01:31:27.439819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.854 qpair failed and we were unable to recover it. 00:28:51.855 [2024-05-15 01:31:27.440317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.440750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.440798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 00:28:51.855 [2024-05-15 01:31:27.441271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.441636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.441675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 00:28:51.855 [2024-05-15 01:31:27.442122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.442561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.442601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 00:28:51.855 [2024-05-15 01:31:27.443100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.443507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.443556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 00:28:51.855 [2024-05-15 01:31:27.444050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.444450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.444495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 00:28:51.855 [2024-05-15 01:31:27.444989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.445396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.445414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 
00:28:51.855 [2024-05-15 01:31:27.445777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.446214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.446232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 00:28:51.855 [2024-05-15 01:31:27.446595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.446987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.447004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 00:28:51.855 [2024-05-15 01:31:27.447430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.447843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.447883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 00:28:51.855 [2024-05-15 01:31:27.448394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.448906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.448945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 00:28:51.855 [2024-05-15 01:31:27.449352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.449722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.449761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 00:28:51.855 [2024-05-15 01:31:27.450258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.450675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.450715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 00:28:51.855 [2024-05-15 01:31:27.451151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.451660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.451678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 
00:28:51.855 [2024-05-15 01:31:27.452127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.452412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.452452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 00:28:51.855 [2024-05-15 01:31:27.452869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.453348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.453388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 00:28:51.855 [2024-05-15 01:31:27.453810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.454241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.454281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 00:28:51.855 [2024-05-15 01:31:27.454693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.455171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.455220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 00:28:51.855 [2024-05-15 01:31:27.455690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.456101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.456141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 00:28:51.855 [2024-05-15 01:31:27.456501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.456919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.456958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 00:28:51.855 [2024-05-15 01:31:27.457447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.457876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.457892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 
00:28:51.855 [2024-05-15 01:31:27.458265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.458709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.458748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 00:28:51.855 [2024-05-15 01:31:27.459266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.459750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.855 [2024-05-15 01:31:27.459789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.855 qpair failed and we were unable to recover it. 00:28:51.855 [2024-05-15 01:31:27.460249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.460727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.460766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 00:28:51.856 [2024-05-15 01:31:27.461297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.461757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.461796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 00:28:51.856 [2024-05-15 01:31:27.462268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.462724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.462763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 00:28:51.856 [2024-05-15 01:31:27.463237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.463700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.463739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 00:28:51.856 [2024-05-15 01:31:27.464159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.464584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.464624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 
00:28:51.856 [2024-05-15 01:31:27.465115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.465589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.465629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 00:28:51.856 [2024-05-15 01:31:27.466164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.466654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.466671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 00:28:51.856 [2024-05-15 01:31:27.467110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.467552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.467592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 00:28:51.856 [2024-05-15 01:31:27.468035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.468495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.468535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 00:28:51.856 [2024-05-15 01:31:27.468975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.469435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.469476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 00:28:51.856 [2024-05-15 01:31:27.470013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.470472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.470521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 00:28:51.856 [2024-05-15 01:31:27.470982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.471445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.471485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 
00:28:51.856 [2024-05-15 01:31:27.471901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.472385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.472424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 00:28:51.856 [2024-05-15 01:31:27.472947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.473455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.473496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 00:28:51.856 [2024-05-15 01:31:27.474001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.474499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.474539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 00:28:51.856 [2024-05-15 01:31:27.474914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.475293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.475310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 00:28:51.856 [2024-05-15 01:31:27.475715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.476157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.476174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 00:28:51.856 [2024-05-15 01:31:27.476563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.476926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.476966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 00:28:51.856 [2024-05-15 01:31:27.477436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.477845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.477884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 
00:28:51.856 [2024-05-15 01:31:27.478355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.478835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.478874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 00:28:51.856 [2024-05-15 01:31:27.479402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.479843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.856 [2024-05-15 01:31:27.479883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.856 qpair failed and we were unable to recover it. 00:28:51.856 [2024-05-15 01:31:27.480409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.480828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.480867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 00:28:51.857 [2024-05-15 01:31:27.481362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.481842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.481881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 00:28:51.857 [2024-05-15 01:31:27.482387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.482835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.482874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 00:28:51.857 [2024-05-15 01:31:27.483393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.483881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.483920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 00:28:51.857 [2024-05-15 01:31:27.484422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.484899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.484938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 
00:28:51.857 [2024-05-15 01:31:27.485454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.485899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.485938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 00:28:51.857 [2024-05-15 01:31:27.486449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.486809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.486848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 00:28:51.857 [2024-05-15 01:31:27.487343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.487855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.487894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 00:28:51.857 [2024-05-15 01:31:27.488398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.488863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.488903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 00:28:51.857 [2024-05-15 01:31:27.489390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.489837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.489877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 00:28:51.857 [2024-05-15 01:31:27.490386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.490822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.490861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 00:28:51.857 [2024-05-15 01:31:27.491386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.491836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.491875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 
00:28:51.857 [2024-05-15 01:31:27.492379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.492799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.492844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 00:28:51.857 [2024-05-15 01:31:27.493341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.493786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.493825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 00:28:51.857 [2024-05-15 01:31:27.494305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.494778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.494817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 00:28:51.857 [2024-05-15 01:31:27.495330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.495777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.495817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 00:28:51.857 [2024-05-15 01:31:27.496272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.496753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.496793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 00:28:51.857 [2024-05-15 01:31:27.497323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.497835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.497874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 00:28:51.857 [2024-05-15 01:31:27.498395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.498847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.498887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 
00:28:51.857 [2024-05-15 01:31:27.499381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.499822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.499839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 00:28:51.857 [2024-05-15 01:31:27.500273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.500660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.500699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 00:28:51.857 [2024-05-15 01:31:27.501149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.501591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.857 [2024-05-15 01:31:27.501632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.857 qpair failed and we were unable to recover it. 00:28:51.857 [2024-05-15 01:31:27.502133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.502615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.502661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.858 qpair failed and we were unable to recover it. 00:28:51.858 [2024-05-15 01:31:27.503158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.503634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.503675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.858 qpair failed and we were unable to recover it. 00:28:51.858 [2024-05-15 01:31:27.504176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.504610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.504650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.858 qpair failed and we were unable to recover it. 00:28:51.858 [2024-05-15 01:31:27.505073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.505562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.505603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.858 qpair failed and we were unable to recover it. 
00:28:51.858 [2024-05-15 01:31:27.506111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.506624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.506664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.858 qpair failed and we were unable to recover it. 00:28:51.858 [2024-05-15 01:31:27.507177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.507632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.507672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.858 qpair failed and we were unable to recover it. 00:28:51.858 [2024-05-15 01:31:27.508099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.508504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.508544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.858 qpair failed and we were unable to recover it. 00:28:51.858 [2024-05-15 01:31:27.509044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.509539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.509579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.858 qpair failed and we were unable to recover it. 00:28:51.858 [2024-05-15 01:31:27.510059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.510421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.510462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.858 qpair failed and we were unable to recover it. 00:28:51.858 [2024-05-15 01:31:27.510796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.511184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.511207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.858 qpair failed and we were unable to recover it. 00:28:51.858 [2024-05-15 01:31:27.511580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.512006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.512045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.858 qpair failed and we were unable to recover it. 
00:28:51.858 [2024-05-15 01:31:27.512557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.512745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.512762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.858 qpair failed and we were unable to recover it. 00:28:51.858 [2024-05-15 01:31:27.513217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.513704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.513744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.858 qpair failed and we were unable to recover it. 00:28:51.858 [2024-05-15 01:31:27.514206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.514664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.514704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.858 qpair failed and we were unable to recover it. 00:28:51.858 [2024-05-15 01:31:27.515211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.515623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.515662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.858 qpair failed and we were unable to recover it. 00:28:51.858 [2024-05-15 01:31:27.516082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.516328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.516368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.858 qpair failed and we were unable to recover it. 00:28:51.858 [2024-05-15 01:31:27.516859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.517361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.858 [2024-05-15 01:31:27.517401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.859 qpair failed and we were unable to recover it. 00:28:51.859 [2024-05-15 01:31:27.517662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.518148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.518187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.859 qpair failed and we were unable to recover it. 
00:28:51.859 [2024-05-15 01:31:27.518632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 79621 Killed "${NVMF_APP[@]}" "$@" 00:28:51.859 [2024-05-15 01:31:27.519037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.519076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.859 qpair failed and we were unable to recover it. 00:28:51.859 [2024-05-15 01:31:27.519564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.519933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.519950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.859 qpair failed and we were unable to recover it. 00:28:51.859 01:31:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:28:51.859 [2024-05-15 01:31:27.520327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 01:31:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:51.859 [2024-05-15 01:31:27.520747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.520765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.859 qpair failed and we were unable to recover it. 00:28:51.859 01:31:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:51.859 [2024-05-15 01:31:27.521116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 01:31:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:51.859 [2024-05-15 01:31:27.521537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.521556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.859 qpair failed and we were unable to recover it. 00:28:51.859 01:31:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.859 [2024-05-15 01:31:27.521920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.522228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.522246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.859 qpair failed and we were unable to recover it. 00:28:51.859 [2024-05-15 01:31:27.522631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.523023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.523062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.859 qpair failed and we were unable to recover it. 
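The shell message interleaved above ("line 44: 79621 Killed \"${NVMF_APP[@]}\" \"$@\"") is the point of the test: target_disconnect.sh kills the running nvmf target app (pid 79621 here), so every host-side connect() to 10.0.0.2:4420 is refused with errno 111 (ECONNREFUSED) until test case tc2 restarts the target through disconnect_init / nvmfappstart. A minimal Python sketch of the same failure mode with a bounded retry, assuming only the standard library (the function is illustrative, not the SPDK host code):

import errno
import socket
import time

def connect_with_retry(addr: str, port: int, attempts: int = 5, delay: float = 0.5):
    """Open a TCP connection, retrying while the listener is down.

    ECONNREFUSED (errno 111) is exactly what the host sees above while
    no nvmf target is listening on 10.0.0.2:4420.
    """
    for attempt in range(1, attempts + 1):
        try:
            return socket.create_connection((addr, port), timeout=2.0)
        except OSError as exc:
            if exc.errno != errno.ECONNREFUSED:
                raise                  # a different failure: do not mask it
            print(f"attempt {attempt}: connect() failed, errno = {exc.errno}")
            time.sleep(delay)
    return None                        # caller decides how to recover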
00:28:51.859 [2024-05-15 01:31:27.523540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.523845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.523862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.859 qpair failed and we were unable to recover it. 00:28:51.859 [2024-05-15 01:31:27.524250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.524695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.524711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.859 qpair failed and we were unable to recover it. 00:28:51.859 [2024-05-15 01:31:27.525180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.525626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.525644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.859 qpair failed and we were unable to recover it. 00:28:51.859 [2024-05-15 01:31:27.526078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.526490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.526507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.859 qpair failed and we were unable to recover it. 00:28:51.859 [2024-05-15 01:31:27.526822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 01:31:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=80445 00:28:51.859 [2024-05-15 01:31:27.527175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.527200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.859 qpair failed and we were unable to recover it. 00:28:51.859 01:31:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:51.859 01:31:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 80445 00:28:51.859 [2024-05-15 01:31:27.527644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 01:31:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 80445 ']' 00:28:51.859 01:31:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.859 [2024-05-15 01:31:27.528050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.528090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.859 qpair failed and we were unable to recover it. 
00:28:51.859 01:31:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:51.859 [2024-05-15 01:31:27.528530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 01:31:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.859 01:31:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:51.859 [2024-05-15 01:31:27.528966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.528984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.859 qpair failed and we were unable to recover it. 00:28:51.859 01:31:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.859 [2024-05-15 01:31:27.529396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.529750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.529768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.859 qpair failed and we were unable to recover it. 00:28:51.859 [2024-05-15 01:31:27.530129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.530494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.859 [2024-05-15 01:31:27.530511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.859 qpair failed and we were unable to recover it. 00:28:51.859 [2024-05-15 01:31:27.530875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.860 [2024-05-15 01:31:27.531286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:51.860 [2024-05-15 01:31:27.531305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:51.860 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 01:31:27.531737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.532094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.532112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 01:31:27.532475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.532783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.532800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.126 qpair failed and we were unable to recover it. 
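At this point the harness has launched a fresh nvmf_tgt (pid 80445) inside the cvl_0_0_ns_spdk network namespace and blocks in waitforlisten until the application exposes its RPC socket at /var/tmp/spdk.sock. A rough Python equivalent of that wait, polling the Unix-domain socket with the standard library (a sketch of the idea, not SPDK's actual helper):

import os
import socket
import time

def wait_for_rpc_socket(path: str = "/var/tmp/spdk.sock", timeout: float = 30.0) -> bool:
    """Poll until a Unix-domain RPC socket accepts connections, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                try:
                    s.connect(path)
                    return True        # target is up and serving RPCs
                except OSError:
                    pass               # socket file exists but not ready yet
        time.sleep(0.2)
    return False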
00:28:52.126 [2024-05-15 01:31:27.533155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.533376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.533392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 01:31:27.533744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.534045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.534062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 01:31:27.534417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.534715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.534732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 01:31:27.535085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.535438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.535454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 01:31:27.535902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.536212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.536228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 01:31:27.536518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.536876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.536892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 01:31:27.537322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.537672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.537688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.126 qpair failed and we were unable to recover it. 
00:28:52.126 [2024-05-15 01:31:27.538047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.538456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.538473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 01:31:27.538843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.539120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.539137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 01:31:27.539547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.539880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.539896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 01:31:27.540355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.540722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.540739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 01:31:27.541127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.541554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.541571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 01:31:27.541853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.542263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.542280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 01:31:27.542625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.542971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.542987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.126 qpair failed and we were unable to recover it. 
00:28:52.126 [2024-05-15 01:31:27.543421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.543709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.543726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 01:31:27.544157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.544563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.544580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 01:31:27.544800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.545227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.545244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.126 qpair failed and we were unable to recover it. 00:28:52.126 [2024-05-15 01:31:27.545591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.126 [2024-05-15 01:31:27.545859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.545875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.546141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.546472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.546489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.546912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.547343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.547359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.547795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.548086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.548108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 
00:28:52.127 [2024-05-15 01:31:27.548556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.548960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.548976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.549429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.549859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.549875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.550286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.550640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.550657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.550944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.551351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.551367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.551817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.552180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.552213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.552571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.553020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.553037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.553449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.553791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.553808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 
00:28:52.127 [2024-05-15 01:31:27.554189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.554620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.554636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.555073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.555455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.555471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.555839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.556248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.556265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.556701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.557047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.557063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.557370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.557798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.557815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.558203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.558627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.558644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.559018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.559384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.559401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 
00:28:52.127 [2024-05-15 01:31:27.559833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.560168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.560184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.560611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.560968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.560984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.561330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.561693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.561709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.562059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.562407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.562424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.562602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.562979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.562995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.563342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.563690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.563706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.564070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.564210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.564226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 
00:28:52.127 [2024-05-15 01:31:27.564687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.565059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.565075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.565418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.565781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.565797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.566090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.566435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.566451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.566885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.567315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.567332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.567743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.567999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.127 [2024-05-15 01:31:27.568015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.127 qpair failed and we were unable to recover it. 00:28:52.127 [2024-05-15 01:31:27.568384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.568769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.568787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.569147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.569557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.569574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 
00:28:52.128 [2024-05-15 01:31:27.569991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.570330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.570347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.570741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.571010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.571026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.571408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.571816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.571832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.572201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.572501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.572517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.572872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.573179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.573201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.573559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.573844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.573860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.574147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.574486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.574503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 
00:28:52.128 [2024-05-15 01:31:27.574956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.575227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.575243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.575615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.575972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.575988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.576411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.576691] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:28:52.128 [2024-05-15 01:31:27.576729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.576741] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:52.128 [2024-05-15 01:31:27.576747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.577111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.577538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.577554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.577969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.578313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.578331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.578492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.578900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.578916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.579255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.579616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.579632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 
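The banner interleaved above shows the restarted target bringing up DPDK 23.11.0 with the EAL parameters passed through from nvmfappstart, including the core mask -c 0xF0 that mirrors the -m 0xF0 given to nvmf_tgt. A core mask is just a hex bitmap of CPU ids; a small Python sketch to expand one (illustrative only, not part of DPDK or SPDK):

def cores_from_mask(mask: str) -> list:
    """Expand a DPDK/SPDK core mask such as '0xF0' into the CPU ids it selects."""
    value = int(mask, 16)
    return [bit for bit in range(value.bit_length()) if (value >> bit) & 1]

# '0xF0' is binary 1111_0000, so the target is pinned to cores 4-7.
print(cores_from_mask("0xF0"))   # [4, 5, 6, 7]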
00:28:52.128 [2024-05-15 01:31:27.580065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.580487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.580505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.580862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.581200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.581218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.581575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.581929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.581946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.582372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.582776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.582803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.583146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.583521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.583538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.583844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.584205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.584222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.584584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.584932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.584948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 
00:28:52.128 [2024-05-15 01:31:27.585294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.585703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.585722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.586079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.586433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.586451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.586817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.587222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.587239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.587634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.587929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.587946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.588290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.588648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.588665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.588989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.589421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.128 [2024-05-15 01:31:27.589437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.128 qpair failed and we were unable to recover it. 00:28:52.128 [2024-05-15 01:31:27.589806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.590237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.590254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 
00:28:52.129 [2024-05-15 01:31:27.590473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.590911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.590928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.591279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.591714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.591730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.591952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.592390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.592408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.592831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.593215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.593231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.593655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.594060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.594076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.594480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.594831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.594848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.595264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.595694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.595710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 
00:28:52.129 [2024-05-15 01:31:27.595852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.596258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.596275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.596709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.597115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.597131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.597485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.597890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.597906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.598312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.598752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.598768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.599134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.599542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.599558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.599991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.600421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.600438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.600843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.601200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.601216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 
00:28:52.129 [2024-05-15 01:31:27.601651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.602056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.602072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.602471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.602839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.602855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.603285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.603718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.603734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.604093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.604449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.604466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.604805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.605108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.605125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.605550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.605991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.606007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.606417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.606784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.606801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 
00:28:52.129 [2024-05-15 01:31:27.607166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.607584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.607601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.608030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.608312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.608330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.608611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.608985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.609001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.609461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.609819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.609836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.610125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.610512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.610529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.610810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.611248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.611265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.129 [2024-05-15 01:31:27.611571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.611893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.611910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 
00:28:52.129 [2024-05-15 01:31:27.612271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.612665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.129 [2024-05-15 01:31:27.612682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.129 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.612975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.613427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.613444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.613877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.130 [2024-05-15 01:31:27.614233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.614252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.614655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.615100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.615116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.615570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.615872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.615888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.616236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.616585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.616601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.616954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.617323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.617340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 
00:28:52.130 [2024-05-15 01:31:27.617636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.618041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.618058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.618439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.618723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.618739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.618989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.619410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.619426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.619791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.620201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.620218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.620641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.621046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.621062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.621264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.621677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.621693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.622030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.622459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.622475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 
00:28:52.130 [2024-05-15 01:31:27.622910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.623196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.623213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.623412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.623750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.623766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.624200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.624486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.624502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.624910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.625335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.625352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.625708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.626125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.626142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.626570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.626955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.626971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.627327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.627765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.627781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 
00:28:52.130 [2024-05-15 01:31:27.628212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.628659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.628675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.629029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.629456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.629473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.629880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.630156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.630172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.130 [2024-05-15 01:31:27.630399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.630740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.130 [2024-05-15 01:31:27.630756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.130 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.631116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.631399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.631416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.631843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.632273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.632292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.632653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.632988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.633004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 
00:28:52.131 [2024-05-15 01:31:27.633357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.633506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.633522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.633903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.634027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.634043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.634475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.634901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.634917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.635351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.635617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.635633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.635922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.636373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.636389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.636744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.637188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.637212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.637547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.637833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.637849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 
00:28:52.131 [2024-05-15 01:31:27.638208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.638555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.638571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.638860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.639264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.639281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.639666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.640013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.640029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.640299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.640720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.640736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.641154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.641404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.641420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.641780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.642219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.642236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.642592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.642950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.642966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 
00:28:52.131 [2024-05-15 01:31:27.643361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.643716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.643733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.644159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.644531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.644548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.644830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.645256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.645273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.645721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.646053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.646069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.646504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.646910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.646926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.647264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.647691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.647707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.648058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.648394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.648411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 
00:28:52.131 [2024-05-15 01:31:27.648721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.649002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.649017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.649451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.649879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.649895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.650325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.650749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.650765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.131 qpair failed and we were unable to recover it. 00:28:52.131 [2024-05-15 01:31:27.651073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.651521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.131 [2024-05-15 01:31:27.651537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.651968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.652397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.652413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.652798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.653073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.653089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.653503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.653931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.653947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 
00:28:52.132 [2024-05-15 01:31:27.654300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.654643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.654659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.655113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.655394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.655411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.655795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.656213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.656229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.656660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.657039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.657056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.657349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.657708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.657725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.658060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.658408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.658424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.658854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.659284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.659300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 
00:28:52.132 [2024-05-15 01:31:27.659707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.660020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.660036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.660411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.660839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.660855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.661288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.661625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.661641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.662028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.662376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.662393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.662750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.663114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.663130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.663487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.663824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.663840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.664215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.664507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.664523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 
00:28:52.132 [2024-05-15 01:31:27.664880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.665282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.665298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.665732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.666017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.666033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.666317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.666747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.666763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.667105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.667544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.667560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.667821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:52.132 [2024-05-15 01:31:27.667972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.668304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.668321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.668676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.669084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.669102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.669538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.669838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.669855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 
00:28:52.132 [2024-05-15 01:31:27.670284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.670639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.670656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.132 [2024-05-15 01:31:27.671092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.671440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.132 [2024-05-15 01:31:27.671458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.132 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.671909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.672262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.672280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.672574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.673002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.673019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.673356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.673709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.673725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.674164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.674581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.674598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.675036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.675387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.675404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 
00:28:52.133 [2024-05-15 01:31:27.675834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.676172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.676188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.676562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.676925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.676941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.677278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.677699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.677716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.678145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.678549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.678569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.679024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.679359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.679376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.679804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.680156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.680172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.680543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.680920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.680936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 
00:28:52.133 [2024-05-15 01:31:27.681135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.681539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.681556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.682012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.682424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.682440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.682792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.683171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.683188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.683577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.683980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.683996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.684265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.684667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.684683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.685111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.685533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.685551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.685896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.686090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.686107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 
00:28:52.133 [2024-05-15 01:31:27.686475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.686783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.686799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.687142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.687563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.687580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.688012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.688462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.688478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.688821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.689182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.689202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.689398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.689816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.689832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.690185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.690556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.690572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.690928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.691285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.691302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 
00:28:52.133 [2024-05-15 01:31:27.691596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.691872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.691888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.133 qpair failed and we were unable to recover it. 00:28:52.133 [2024-05-15 01:31:27.692299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.133 [2024-05-15 01:31:27.692725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.692741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.693149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.693577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.693593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.694055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.694459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.694476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.694767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.695147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.695163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.695602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.695903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.695919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.696283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.696632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.696649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 
00:28:52.134 [2024-05-15 01:31:27.697006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.697356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.697372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.697780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.698188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.698210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.698588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.699012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.699028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.699370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.699774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.699790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.700145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.700498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.700514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.700867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.701272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.701289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.701640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.702071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.702091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 
00:28:52.134 [2024-05-15 01:31:27.702455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.702815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.702832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.703196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.703598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.703615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.704041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.704467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.704485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.704843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.705218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.705238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.705652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.705962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.705980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.706344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.706753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.706772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.707149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.707454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.707472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 
00:28:52.134 [2024-05-15 01:31:27.707931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.708277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.708294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.708646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.708985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.709002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.709365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.709772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.709789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.710125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.710531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.710550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.710771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.711149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.711168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.134 qpair failed and we were unable to recover it. 00:28:52.134 [2024-05-15 01:31:27.711588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.134 [2024-05-15 01:31:27.711924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.711943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.135 qpair failed and we were unable to recover it. 00:28:52.135 [2024-05-15 01:31:27.712314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.712670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.712688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.135 qpair failed and we were unable to recover it. 
00:28:52.135 [2024-05-15 01:31:27.713127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.713490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.713508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.135 qpair failed and we were unable to recover it. 00:28:52.135 [2024-05-15 01:31:27.713716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.714089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.714106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.135 qpair failed and we were unable to recover it. 00:28:52.135 [2024-05-15 01:31:27.714458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.714886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.714902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.135 qpair failed and we were unable to recover it. 00:28:52.135 [2024-05-15 01:31:27.715251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.715608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.715624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.135 qpair failed and we were unable to recover it. 00:28:52.135 [2024-05-15 01:31:27.716006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.716409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.716426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.135 qpair failed and we were unable to recover it. 00:28:52.135 [2024-05-15 01:31:27.716792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.717149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.717165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.135 qpair failed and we were unable to recover it. 00:28:52.135 [2024-05-15 01:31:27.717600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.718029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.718045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.135 qpair failed and we were unable to recover it. 
00:28:52.135 [2024-05-15 01:31:27.718347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.718652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.718668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.135 qpair failed and we were unable to recover it. 00:28:52.135 [2024-05-15 01:31:27.719018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.719419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.719436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.135 qpair failed and we were unable to recover it. 00:28:52.135 [2024-05-15 01:31:27.719794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.720199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.720215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.135 qpair failed and we were unable to recover it. 00:28:52.135 [2024-05-15 01:31:27.720623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.721056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.721071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.135 qpair failed and we were unable to recover it. 00:28:52.135 [2024-05-15 01:31:27.721526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.721804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.721820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.135 qpair failed and we were unable to recover it. 00:28:52.135 [2024-05-15 01:31:27.722156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.722422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.722439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.135 qpair failed and we were unable to recover it. 00:28:52.135 [2024-05-15 01:31:27.722843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.723119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.723135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.135 qpair failed and we were unable to recover it. 
00:28:52.135 [2024-05-15 01:31:27.723488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.723853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.723869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.135 qpair failed and we were unable to recover it. 00:28:52.135 [2024-05-15 01:31:27.724277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.724623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.724639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.135 qpair failed and we were unable to recover it. 00:28:52.135 [2024-05-15 01:31:27.724999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.725439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.725456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.135 qpair failed and we were unable to recover it. 00:28:52.135 [2024-05-15 01:31:27.725866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.135 [2024-05-15 01:31:27.726290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.726307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 00:28:52.136 [2024-05-15 01:31:27.726668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.727094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.727110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 00:28:52.136 [2024-05-15 01:31:27.727553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.727889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.727905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 00:28:52.136 [2024-05-15 01:31:27.728260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.728694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.728710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 
00:28:52.136 [2024-05-15 01:31:27.729009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.729345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.729361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 00:28:52.136 [2024-05-15 01:31:27.729779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.730213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.730231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 00:28:52.136 [2024-05-15 01:31:27.730640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.730849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.730865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 00:28:52.136 [2024-05-15 01:31:27.731270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.731643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.731659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 00:28:52.136 [2024-05-15 01:31:27.732000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.732386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.732402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 00:28:52.136 [2024-05-15 01:31:27.732833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.733176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.733195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 00:28:52.136 [2024-05-15 01:31:27.733627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.734053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.734069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 
00:28:52.136 [2024-05-15 01:31:27.734503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.734858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.734874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 00:28:52.136 [2024-05-15 01:31:27.735306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.735729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.735745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 00:28:52.136 [2024-05-15 01:31:27.736121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.736549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.736565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 00:28:52.136 [2024-05-15 01:31:27.736905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.737311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.737331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 00:28:52.136 [2024-05-15 01:31:27.737760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.738184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.738213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 00:28:52.136 [2024-05-15 01:31:27.738493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.738866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.738882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 00:28:52.136 [2024-05-15 01:31:27.739323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.739747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.739766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 
00:28:52.136 [2024-05-15 01:31:27.740126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.740558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.740576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 00:28:52.136 [2024-05-15 01:31:27.741007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.741011] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:52.136 [2024-05-15 01:31:27.741044] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:52.136 [2024-05-15 01:31:27.741053] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:52.136 [2024-05-15 01:31:27.741062] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:52.136 [2024-05-15 01:31:27.741069] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:52.136 [2024-05-15 01:31:27.741239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:28:52.136 [2024-05-15 01:31:27.741369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.741385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 00:28:52.136 [2024-05-15 01:31:27.741317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:28:52.136 [2024-05-15 01:31:27.741407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:28:52.136 [2024-05-15 01:31:27.741408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:28:52.136 [2024-05-15 01:31:27.741822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.742153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.136 [2024-05-15 01:31:27.742169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.136 qpair failed and we were unable to recover it. 00:28:52.136 [2024-05-15 01:31:27.742583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.742996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.743012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.743423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.743856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.743873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 
00:28:52.137 [2024-05-15 01:31:27.744209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.744498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.744514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.744925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.745257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.745273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.745695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.746044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.746060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.746468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.746823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.746839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.747119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.747451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.747468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.747929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.748282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.748299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.748716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.749144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.749160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 
00:28:52.137 [2024-05-15 01:31:27.749647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.750057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.750074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.750377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.750726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.750743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.751171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.751615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.751632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.752013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.752363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.752381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.752595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.752923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.752940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.753351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.753778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.753795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.754260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.754716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.754734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 
00:28:52.137 [2024-05-15 01:31:27.755099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.755503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.755521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.755959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.756315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.756332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.756739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.757034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.757050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.757471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.757847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.757864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.758304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.758732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.758750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.759187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.759555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.759572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.759951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.760245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.760263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 
00:28:52.137 [2024-05-15 01:31:27.760629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.761008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.761027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.761377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.761739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.761757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.762201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.762650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.762667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.763057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.763478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.137 [2024-05-15 01:31:27.763495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.137 qpair failed and we were unable to recover it. 00:28:52.137 [2024-05-15 01:31:27.763788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.764166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.764182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.764541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.764949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.764965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.765325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.765755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.765771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 
00:28:52.138 [2024-05-15 01:31:27.766138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.766512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.766529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.766935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.767364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.767381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.767680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.768109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.768127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.768483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.768838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.768855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.769286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.769626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.769643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.770074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.770424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.770445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.770881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.771286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.771305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 
00:28:52.138 [2024-05-15 01:31:27.771741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.772090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.772108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.772538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.772968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.772986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.773420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.773778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.773795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.774228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.774593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.774612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.775019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.775449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.775466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.775801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.776220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.776238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.776670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.777011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.777028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 
00:28:52.138 [2024-05-15 01:31:27.777411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.777860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.777877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.778254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.778688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.778703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.779001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.779418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.779435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.779819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.780201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.780217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.780570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.781012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.781028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.781248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.781580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.781597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.781910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.782316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.782334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 
00:28:52.138 [2024-05-15 01:31:27.782765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.783237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.783254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.138 qpair failed and we were unable to recover it. 00:28:52.138 [2024-05-15 01:31:27.783539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.138 [2024-05-15 01:31:27.783885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.139 [2024-05-15 01:31:27.783902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.139 qpair failed and we were unable to recover it. 00:28:52.139 [2024-05-15 01:31:27.784260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.139 [2024-05-15 01:31:27.784609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.139 [2024-05-15 01:31:27.784626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.139 qpair failed and we were unable to recover it. 00:28:52.139 [2024-05-15 01:31:27.784920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.139 [2024-05-15 01:31:27.785262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.139 [2024-05-15 01:31:27.785279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.139 qpair failed and we were unable to recover it. 00:28:52.139 [2024-05-15 01:31:27.785619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.139 [2024-05-15 01:31:27.786053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.139 [2024-05-15 01:31:27.786070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.139 qpair failed and we were unable to recover it. 00:28:52.139 [2024-05-15 01:31:27.786501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.139 [2024-05-15 01:31:27.786834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.139 [2024-05-15 01:31:27.786852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.139 qpair failed and we were unable to recover it. 00:28:52.139 [2024-05-15 01:31:27.787204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.139 [2024-05-15 01:31:27.787577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.139 [2024-05-15 01:31:27.787593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.139 qpair failed and we were unable to recover it. 
00:28:52.139 [2024-05-15 01:31:27.788016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.139 [2024-05-15 01:31:27.788373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.139 [2024-05-15 01:31:27.788390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420
00:28:52.139 qpair failed and we were unable to recover it.
[... the same three-message failure sequence (connect() failed, errno = 111 -> sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats for every reconnect attempt logged from 2024-05-15 01:31:27.788 through 01:31:27.904 (Jenkins timestamps 00:28:52.139 to 00:28:52.411) ...]
00:28:52.411 [2024-05-15 01:31:27.905270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.411 [2024-05-15 01:31:27.905699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.411 [2024-05-15 01:31:27.905715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.411 qpair failed and we were unable to recover it. 00:28:52.411 [2024-05-15 01:31:27.906067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.411 [2024-05-15 01:31:27.906492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.411 [2024-05-15 01:31:27.906509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.411 qpair failed and we were unable to recover it. 00:28:52.411 [2024-05-15 01:31:27.906935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.411 [2024-05-15 01:31:27.907383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.411 [2024-05-15 01:31:27.907399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.411 qpair failed and we were unable to recover it. 00:28:52.411 [2024-05-15 01:31:27.907697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.411 [2024-05-15 01:31:27.908108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.411 [2024-05-15 01:31:27.908125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.908532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.908890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.908906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.909268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.909671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.909687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.910122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.910405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.910421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 
00:28:52.412 [2024-05-15 01:31:27.910855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.911281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.911297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.911705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.912133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.912148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.912347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.912717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.912734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.913077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.913502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.913519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.913881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.914304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.914320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.914671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.915075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.915091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.915468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.915817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.915833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 
00:28:52.412 [2024-05-15 01:31:27.916268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.916678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.916694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.917150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.917527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.917543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.917878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.918302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.918318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.918671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.919110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.919126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.919500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.919923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.919939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.920302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.920662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.920678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.921116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.921541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.921557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 
00:28:52.412 [2024-05-15 01:31:27.921917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.922323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.922340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.922768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.923125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.923141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.923571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.923996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.924012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.924418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.924820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.924837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.925269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.925694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.925711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.926015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.926441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.926458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.926884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.927289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.927306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 
00:28:52.412 [2024-05-15 01:31:27.927736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.928136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.928152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.928562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.928988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.929004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.929339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.929644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.929660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.930092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.930462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.930478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.412 [2024-05-15 01:31:27.930885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.931255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.412 [2024-05-15 01:31:27.931271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.412 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.931673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.932077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.932093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.932522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.932926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.932942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 
00:28:52.413 [2024-05-15 01:31:27.933347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.933696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.933712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.934121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.934499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.934515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.934971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.935386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.935405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.935837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.936187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.936207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.936561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.936965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.936981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.937409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.937843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.937859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.938295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.938699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.938715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 
00:28:52.413 [2024-05-15 01:31:27.939169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.939621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.939637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.940064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.940438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.940454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.940806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.941236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.941253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.941607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.942034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.942049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.942353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.942700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.942716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.943148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.943552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.943570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.943974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.944402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.944419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 
00:28:52.413 [2024-05-15 01:31:27.944806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.945106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.945123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.945553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.945908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.945924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.946335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.946762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.946779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.947088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.947488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.947505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.947879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.948305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.948321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.948693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.949119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.949135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.949441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.949740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.949756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 
00:28:52.413 [2024-05-15 01:31:27.950188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.950456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.950472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.950841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.951201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.951217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.951656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.952027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.952044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.952475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.952766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.952782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.953207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.953400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.953416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.953853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.954257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.954273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 00:28:52.413 [2024-05-15 01:31:27.954710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.955134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.413 [2024-05-15 01:31:27.955150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.413 qpair failed and we were unable to recover it. 
00:28:52.414 [2024-05-15 01:31:27.955421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.955707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.955722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.956102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.956244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.956260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.956630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.956982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.956998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.957373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.957819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.957836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.958201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.958553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.958569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.958931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.959216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.959232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.959599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.960030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.960046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 
00:28:52.414 [2024-05-15 01:31:27.960473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.960848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.960864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.961062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.961465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.961482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.961786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.962195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.962212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.962640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.963044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.963061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.963414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.963817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.963833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.964265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.964613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.964629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.965058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.965404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.965421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 
00:28:52.414 [2024-05-15 01:31:27.965704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.966132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.966148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.966552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.966960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.966977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.967317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.967596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.967612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.968040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.968449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.968465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.968900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.969248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.969264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.969693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.970064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.970080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.970524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.970826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.970843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 
00:28:52.414 [2024-05-15 01:31:27.971209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.971653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.971669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.972042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.972465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.972482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.972890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.973294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.973311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.973747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.974150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.974166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.974602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.975007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.975023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.975384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.975755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.975771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.976125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.976545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.976562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 
00:28:52.414 [2024-05-15 01:31:27.976993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.977396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.977413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.414 qpair failed and we were unable to recover it. 00:28:52.414 [2024-05-15 01:31:27.977820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.978246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.414 [2024-05-15 01:31:27.978262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.978622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.979047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.979063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.979492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.979864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.979880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.980167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.980599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.980615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.980777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.981208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.981224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.981609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.982071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.982088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 
00:28:52.415 [2024-05-15 01:31:27.982543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.982947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.982966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.983399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.983822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.983838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.984220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.984562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.984578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.985015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.985444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.985461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.985895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.986244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.986261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.986596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.986974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.986990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.987406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.987757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.987773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 
00:28:52.415 [2024-05-15 01:31:27.988068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.988415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.988431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.988839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.989266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.989282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.989740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.990020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.990036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.990445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.990792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.990809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.991239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.991594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.991611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.991953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.992380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.992397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.992731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.993156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.993172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 
00:28:52.415 [2024-05-15 01:31:27.993597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.993955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.993971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.994347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.994725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.994741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.995121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.995545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.995561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.995911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.996311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.996327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.996758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.997160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.997176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.997634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.997998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.998014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:27.998371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.998808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.998824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 
00:28:52.415 [2024-05-15 01:31:27.999203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.999626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:27.999642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:28.000001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:28.000351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:28.000367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:28.000819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:28.001225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:28.001241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.415 [2024-05-15 01:31:28.001594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:28.001939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.415 [2024-05-15 01:31:28.001955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.415 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.002404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.002768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.002784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.003215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.003642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.003658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.004036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.004467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.004484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 
00:28:52.416 [2024-05-15 01:31:28.004843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.005247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.005264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.005693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.006119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.006135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.006470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.006807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.006823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.007038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.007458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.007474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.007886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.008272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.008288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.008736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.009160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.009176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.009613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.009950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.009966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 
00:28:52.416 [2024-05-15 01:31:28.010395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.010676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.010693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.011045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.011391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.011407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.011748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.012126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.012142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.012559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.012935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.012951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.013359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.013691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.013708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.014090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.014515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.014532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.014940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.015313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.015330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 
00:28:52.416 [2024-05-15 01:31:28.015742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.016027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.016043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.016474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.016827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.016843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.017189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.017480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.017496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.017859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.018286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.018303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.018754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.019158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.019174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.019613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.020040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.020057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.020416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.020785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.020801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 
00:28:52.416 [2024-05-15 01:31:28.021229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.021659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.416 [2024-05-15 01:31:28.021675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.416 qpair failed and we were unable to recover it. 00:28:52.416 [2024-05-15 01:31:28.022052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.022400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.022417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.022798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.023199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.023218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.023624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.023963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.023979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.024405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.024748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.024764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.025198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.025574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.025590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.026034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.026441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.026458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 
00:28:52.417 [2024-05-15 01:31:28.026795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.027142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.027158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.027513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.027883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.027900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.028280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.028647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.028663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.029097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.029497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.029513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.029870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.030295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.030312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.030720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.030921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.030937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.031325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.031691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.031707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 
00:28:52.417 [2024-05-15 01:31:28.032044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.032414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.032430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.032864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.033199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.033215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.033592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.033997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.034013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.034419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.034871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.034888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.035298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.035702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.035718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.036125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.036472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.036488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.036920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.037339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.037355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 
00:28:52.417 [2024-05-15 01:31:28.037705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.037915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.037931] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.038363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.038741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.038758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.039115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.039568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.039585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.039942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.040258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.040275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.040645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.041082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.041099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.041507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.041862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.041878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.042098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.042377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.042393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 
00:28:52.417 [2024-05-15 01:31:28.042804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.043157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.043173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.043528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.043931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.043947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.044378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.044517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.417 [2024-05-15 01:31:28.044533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.417 qpair failed and we were unable to recover it. 00:28:52.417 [2024-05-15 01:31:28.044872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.045279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.045296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.045763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.046168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.046185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.046579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.046930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.046947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.047380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.047780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.047796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 
00:28:52.418 [2024-05-15 01:31:28.048208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.048511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.048528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.048892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.049267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.049283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.049642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.049995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.050011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.050302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.050637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.050654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.050960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.051245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.051262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.051612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.052049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.052065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.052351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.052694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.052710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 
00:28:52.418 [2024-05-15 01:31:28.053139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.053511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.053527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.053879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.054165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.054181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.054622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.054887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.054904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.055314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.055654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.055670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.056033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.056383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.056400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.056776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.057111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.057128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.057491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.057854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.057870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 
00:28:52.418 [2024-05-15 01:31:28.058280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.058706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.058722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.059033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.059295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.059312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.059643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.060013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.060030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.060468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.060816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.060832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.061208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.061657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.061675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.061982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.062285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.062301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.062584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.062947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.062964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 
00:28:52.418 [2024-05-15 01:31:28.063266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.063458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.063474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.063906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.064210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.064226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.064586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.065014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.065030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.065418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.065844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.065860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.066161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.066404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.418 [2024-05-15 01:31:28.066420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.418 qpair failed and we were unable to recover it. 00:28:52.418 [2024-05-15 01:31:28.066839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.067261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.067278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.419 qpair failed and we were unable to recover it. 00:28:52.419 [2024-05-15 01:31:28.067642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.068046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.068062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.419 qpair failed and we were unable to recover it. 
00:28:52.419 [2024-05-15 01:31:28.068356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.068622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.068639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.419 qpair failed and we were unable to recover it. 00:28:52.419 [2024-05-15 01:31:28.068927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.069299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.069316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.419 qpair failed and we were unable to recover it. 00:28:52.419 [2024-05-15 01:31:28.069626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.069970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.069986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.419 qpair failed and we were unable to recover it. 00:28:52.419 [2024-05-15 01:31:28.070274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.070639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.070655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.419 qpair failed and we were unable to recover it. 00:28:52.419 [2024-05-15 01:31:28.071066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.071405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.071422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.419 qpair failed and we were unable to recover it. 00:28:52.419 [2024-05-15 01:31:28.071768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.072107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.072123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.419 qpair failed and we were unable to recover it. 00:28:52.419 [2024-05-15 01:31:28.072581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.073007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.073024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.419 qpair failed and we were unable to recover it. 
00:28:52.419 [2024-05-15 01:31:28.073373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.073586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.073603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.419 qpair failed and we were unable to recover it. 00:28:52.419 [2024-05-15 01:31:28.073905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.074260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.074277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.419 qpair failed and we were unable to recover it. 00:28:52.419 [2024-05-15 01:31:28.074555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.074769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.074785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.419 qpair failed and we were unable to recover it. 00:28:52.419 [2024-05-15 01:31:28.075141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.075490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.075506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.419 qpair failed and we were unable to recover it. 00:28:52.419 [2024-05-15 01:31:28.075920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.076206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.076223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.419 qpair failed and we were unable to recover it. 00:28:52.419 [2024-05-15 01:31:28.076565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.076898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.076915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.419 qpair failed and we were unable to recover it. 00:28:52.419 [2024-05-15 01:31:28.077281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.077619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.419 [2024-05-15 01:31:28.077636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.419 qpair failed and we were unable to recover it. 
00:28:52.419 [2024-05-15 01:31:28.077911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.419 [2024-05-15 01:31:28.078197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.419 [2024-05-15 01:31:28.078214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420
00:28:52.419 qpair failed and we were unable to recover it.
[... the same three-record sequence (two "connect() failed, errno = 111" records from posix.c:1037, one "sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420" from nvme_tcp.c:2374, then "qpair failed and we were unable to recover it.") repeats through 2024-05-15 01:31:28.081697 ...]
00:28:52.419 [2024-05-15 01:31:28.082106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.419 [2024-05-15 01:31:28.082298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.419 [2024-05-15 01:31:28.082315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420
00:28:52.419 qpair failed and we were unable to recover it.
00:28:52.419 [2024-05-15 01:31:28.082772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.419 [2024-05-15 01:31:28.083150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.419 [2024-05-15 01:31:28.083169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420
00:28:52.419 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f13e4000b90 (addr=10.0.0.2, port=4420) from 2024-05-15 01:31:28.083623 through 01:31:28.186718, every attempt ending with "qpair failed and we were unable to recover it."; console timestamps advance from 00:28:52.419 to 00:28:52.692 over the run ...]
00:28:52.692 [2024-05-15 01:31:28.187109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.692 [2024-05-15 01:31:28.187437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:52.692 [2024-05-15 01:31:28.187454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420
00:28:52.692 qpair failed and we were unable to recover it.
00:28:52.692 [2024-05-15 01:31:28.187806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.188253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.188269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.692 qpair failed and we were unable to recover it. 00:28:52.692 [2024-05-15 01:31:28.188698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.188892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.188908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.692 qpair failed and we were unable to recover it. 00:28:52.692 [2024-05-15 01:31:28.189260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.189680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.189696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.692 qpair failed and we were unable to recover it. 00:28:52.692 [2024-05-15 01:31:28.190074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.190423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.190440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.692 qpair failed and we were unable to recover it. 00:28:52.692 [2024-05-15 01:31:28.190740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.191000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.191016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.692 qpair failed and we were unable to recover it. 00:28:52.692 [2024-05-15 01:31:28.191362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.191769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.191785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.692 qpair failed and we were unable to recover it. 00:28:52.692 [2024-05-15 01:31:28.192214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.192567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.192583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.692 qpair failed and we were unable to recover it. 
00:28:52.692 [2024-05-15 01:31:28.192889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.193171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.193187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.692 qpair failed and we were unable to recover it. 00:28:52.692 [2024-05-15 01:31:28.193549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.193891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.193907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.692 qpair failed and we were unable to recover it. 00:28:52.692 [2024-05-15 01:31:28.194272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.194624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.194639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.692 qpair failed and we were unable to recover it. 00:28:52.692 [2024-05-15 01:31:28.194937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.195287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.195303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.692 qpair failed and we were unable to recover it. 00:28:52.692 [2024-05-15 01:31:28.195650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.195932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.195948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.692 qpair failed and we were unable to recover it. 00:28:52.692 [2024-05-15 01:31:28.196340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.196671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.196687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.692 qpair failed and we were unable to recover it. 00:28:52.692 [2024-05-15 01:31:28.196984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.197334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.197350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.692 qpair failed and we were unable to recover it. 
00:28:52.692 [2024-05-15 01:31:28.197710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.198076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.198093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.692 qpair failed and we were unable to recover it. 00:28:52.692 [2024-05-15 01:31:28.198404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.692 [2024-05-15 01:31:28.198810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.198826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.199106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.199392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.199408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.199853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.200221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.200238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.200520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.200790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.200806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.201105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.201400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.201416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.201848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.202227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.202244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 
00:28:52.693 [2024-05-15 01:31:28.202548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.202815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.202832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.203189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.203483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.203499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.203777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.204066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.204082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.204391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.204675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.204692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.205035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.205302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.205319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.205734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.206090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.206106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.206479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.206884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.206900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 
00:28:52.693 [2024-05-15 01:31:28.207277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.207630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.207646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.207962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.208230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.208246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.208517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.208869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.208885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.209178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.209511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.209528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.209909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.210188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.210209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.210517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.210811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.210827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.211121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.211537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.211554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 
00:28:52.693 [2024-05-15 01:31:28.212011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.212418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.212434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.212793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.213085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.213101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.213410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.213779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.213796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.214071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.214475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.214491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.214870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.215316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.215332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.215739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.216115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.216131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.216493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.216844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.216859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 
00:28:52.693 [2024-05-15 01:31:28.217137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.217351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.217367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.217774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.218047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.218063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.218403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.218667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.693 [2024-05-15 01:31:28.218683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.693 qpair failed and we were unable to recover it. 00:28:52.693 [2024-05-15 01:31:28.218975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.219183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.219204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.219344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.219703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.219725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.220030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.220401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.220418] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.220699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.221076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.221092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 
00:28:52.694 [2024-05-15 01:31:28.221501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.221881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.221897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.222182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.222594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.222610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.222972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.223420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.223437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.223810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.224215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.224232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.224582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.224933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.224949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.225324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.225666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.225683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.225978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.226269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.226285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 
00:28:52.694 [2024-05-15 01:31:28.226641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.226796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.226813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.227228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.227515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.227531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.227890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.228258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.228275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.228579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.228862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.228878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.229164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.229530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.229546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.229909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.230319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.230335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.230640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.231011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.231027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 
00:28:52.694 [2024-05-15 01:31:28.231312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.231669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.231686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.232097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.232531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.232548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.233002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.233286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.233303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.233644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.233929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.233945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.234237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.234543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.234560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.234867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.235023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.235039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.235378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.235513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.235530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 
00:28:52.694 [2024-05-15 01:31:28.235950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.236248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.236265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.236604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.236952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.236969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.237356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.237655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.237680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.238025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.238424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.238441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.694 qpair failed and we were unable to recover it. 00:28:52.694 [2024-05-15 01:31:28.238776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.694 [2024-05-15 01:31:28.239137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.239163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.239541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.239897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.239914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.240260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.240601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.240617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 
00:28:52.695 [2024-05-15 01:31:28.240924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.241216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.241232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.241664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.242011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.242026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.242238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.242526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.242542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.242686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.242963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.242979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.243278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.243564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.243580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.243873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.244216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.244233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.244576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.244860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.244876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 
00:28:52.695 [2024-05-15 01:31:28.245285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.245616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.245634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.245994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.246287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.246303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.246728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.247075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.247091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.247381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.247669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.247686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.248018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.248282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.248299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.248732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.248997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.249013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.249353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.249648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.249664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 
00:28:52.695 [2024-05-15 01:31:28.249805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.250145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.250162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.250518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.250873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.250888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.251163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.251554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.251570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.251867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.252214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.252241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.252389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.252675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.252691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.253039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.253332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.253349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.253691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.253975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.253991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 
00:28:52.695 [2024-05-15 01:31:28.254171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.254587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.254603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.254884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.255239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.255255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.255619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.255972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.255988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.256265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.256596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.256612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.256887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.257226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.257243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.695 qpair failed and we were unable to recover it. 00:28:52.695 [2024-05-15 01:31:28.257581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.695 [2024-05-15 01:31:28.257921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.257937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.258320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.258709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.258728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 
00:28:52.696 [2024-05-15 01:31:28.259104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.259519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.259535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.259833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.260115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.260131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.260539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.260805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.260821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.261210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.261557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.261573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.261959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.262314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.262330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.262631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.262991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.263008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.263447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.263797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.263813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 
00:28:52.696 [2024-05-15 01:31:28.264165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.264554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.264570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.264931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.265286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.265302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.265643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.265769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.265786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.266077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.266236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.266252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.266517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.266854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.266871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.267217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.267612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.267628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.267980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.268262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.268278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 
00:28:52.696 [2024-05-15 01:31:28.268634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.268985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.269002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.269289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.269743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.269759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.270165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.270328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.270345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.270770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.271132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.271148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.271578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.271927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.271943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.272293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.272641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.272657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.272945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.273304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.273320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 
00:28:52.696 [2024-05-15 01:31:28.273587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.273941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.273956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.274309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.274669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.696 [2024-05-15 01:31:28.274685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.696 qpair failed and we were unable to recover it. 00:28:52.696 [2024-05-15 01:31:28.275030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.275204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.275220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.275587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.275953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.275968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.276397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.276732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.276749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.277108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.277461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.277477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.277864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.278200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.278216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 
00:28:52.697 [2024-05-15 01:31:28.278508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.278721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.278737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.279032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.279408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.279425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.279808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.280136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.280152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.280583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.280915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.280932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.281220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.281572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.281587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.281947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.282298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.282314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.282663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.283090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.283107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 
00:28:52.697 [2024-05-15 01:31:28.283520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.283858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.283874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.284164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.284524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.284541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.284925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.285256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.285273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.285683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.285959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.285975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.286195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.286533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.286549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.286913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.287257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.287273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.287570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.287926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.287942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 
00:28:52.697 [2024-05-15 01:31:28.288308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.288591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.288607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.288952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.289363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.289380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.289664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.289955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.289971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.290315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.290667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.290683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.290894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.291171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.291187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.291481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.291905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.291921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.292133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.292565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.292581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 
00:28:52.697 [2024-05-15 01:31:28.292956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.293241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.293257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.293651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.293854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.293870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.294064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.294346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.294362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.697 [2024-05-15 01:31:28.294741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.295031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.697 [2024-05-15 01:31:28.295047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.697 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.295477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.295837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.295853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.296198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.296479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.296495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.296861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.297208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.297225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 
00:28:52.698 [2024-05-15 01:31:28.297609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.297981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.297998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.298361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.298642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.298658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.299067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.299419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.299435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.299779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.300065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.300081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.300460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.300802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.300818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.301246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.301388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.301404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.301750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.302097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.302113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 
00:28:52.698 [2024-05-15 01:31:28.302487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.302894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.302911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.303341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.303611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.303627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.303917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.304204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.304220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.304629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.304904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.304920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.305264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.305685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.305701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.306073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.306361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.306377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.306670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.306960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.306976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 
00:28:52.698 [2024-05-15 01:31:28.307340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.307562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.307578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.307929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.308078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.308095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.308398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.308794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.308810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.309153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.309573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.309590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.309947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.310302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.310318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.310663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.310999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.311015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.311451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.311740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.311757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 
00:28:52.698 [2024-05-15 01:31:28.312117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.312478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.312495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.312694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.313115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.313132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.313299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.313644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.313660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f13e4000b90 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.314037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.314352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.314373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.314731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.315060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.315076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.698 qpair failed and we were unable to recover it. 00:28:52.698 [2024-05-15 01:31:28.315534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.698 [2024-05-15 01:31:28.315819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.315836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.316128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.316413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.316430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 
00:28:52.699 [2024-05-15 01:31:28.316767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.317116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.317132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.317407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.317703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.317719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.318077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.318407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.318425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.318719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.319152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.319169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.319464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.319808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.319824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.320100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.320515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.320531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.320729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.321096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.321112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 
00:28:52.699 [2024-05-15 01:31:28.321453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.321741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.321757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.322111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.322454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.322471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.322761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.323040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.323056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.323334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.323617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.323633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.323936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.324247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.324264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.324619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.324955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.324972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.325118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.325550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.325567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 
00:28:52.699 [2024-05-15 01:31:28.325857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.326281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.326298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.326730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.327068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.327084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.327217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.327568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.327587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.327932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.328226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.328243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.328618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.328913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.328930] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.329224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.329458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.329475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.329881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.330181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.330203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 
00:28:52.699 [2024-05-15 01:31:28.330571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.330932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.330948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.331350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.331687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.331704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.332136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.332420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.332437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.332841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.333199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.333216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.333379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.333807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.333823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.334179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.334539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.334557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 00:28:52.699 [2024-05-15 01:31:28.334847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.335206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.335223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.699 qpair failed and we were unable to recover it. 
00:28:52.699 [2024-05-15 01:31:28.335590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.699 [2024-05-15 01:31:28.335949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.335965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.336327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.336733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.336750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.337017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.337293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.337310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.337665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.337925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.337941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.338303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.338707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.338724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.339082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.339494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.339510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.339788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.340130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.340146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 
00:28:52.700 [2024-05-15 01:31:28.340493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.340797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.340814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.341225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.341584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.341601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.341955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.342235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.342252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.342539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.342878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.342894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.343238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.343587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.343604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.343912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.344197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.344214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.344593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.344925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.344941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 
00:28:52.700 [2024-05-15 01:31:28.345231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.345520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.345537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.345841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.346033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.346049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.346199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.346494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.346510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.346810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.346967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.346983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.347257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.347599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.347615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.347972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.348310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.348328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.348670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.348953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.348969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 
00:28:52.700 [2024-05-15 01:31:28.349253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.349521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.349538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.349805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.350148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.350165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.350474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.350799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.350816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.351215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.351581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.351597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.351916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.352261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.352279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.352638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.353046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.353062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.353515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.353916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.353932] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 
00:28:52.700 [2024-05-15 01:31:28.354295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.354629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.354645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.354854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.355212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.355230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.700 qpair failed and we were unable to recover it. 00:28:52.700 [2024-05-15 01:31:28.355606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.700 [2024-05-15 01:31:28.355901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.355917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 00:28:52.701 [2024-05-15 01:31:28.356182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.356392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.356408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 00:28:52.701 [2024-05-15 01:31:28.356815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.357236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.357253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 00:28:52.701 [2024-05-15 01:31:28.357608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.357959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.357975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 00:28:52.701 [2024-05-15 01:31:28.358352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.358763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.358780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 
00:28:52.701 [2024-05-15 01:31:28.359189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.359356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.359372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 00:28:52.701 [2024-05-15 01:31:28.359660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.359847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.359864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 00:28:52.701 [2024-05-15 01:31:28.360226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.360489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.360506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 00:28:52.701 [2024-05-15 01:31:28.360898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.361244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.361261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 00:28:52.701 [2024-05-15 01:31:28.361558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.361907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.361927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 00:28:52.701 [2024-05-15 01:31:28.362337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.362789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.362806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 00:28:52.701 [2024-05-15 01:31:28.363023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.363323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.363339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 
00:28:52.701 [2024-05-15 01:31:28.363760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.364164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.364180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 00:28:52.701 [2024-05-15 01:31:28.364539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.364832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.364848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 00:28:52.701 [2024-05-15 01:31:28.365214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.365648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.365665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 00:28:52.701 [2024-05-15 01:31:28.366031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.366326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.366343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 00:28:52.701 [2024-05-15 01:31:28.366758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.367185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.367206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 00:28:52.701 [2024-05-15 01:31:28.367624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.367972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.367988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 00:28:52.701 [2024-05-15 01:31:28.368276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.368624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.368641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 
00:28:52.701 [2024-05-15 01:31:28.369004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.369361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.369378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 00:28:52.701 [2024-05-15 01:31:28.369725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.370066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.701 [2024-05-15 01:31:28.370082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.701 qpair failed and we were unable to recover it. 00:28:52.701 [2024-05-15 01:31:28.370371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.370784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.370800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.371073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.371437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.371453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.371812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.372170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.372187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.372626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.372921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.372938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.373230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.373635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.373653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 
00:28:52.966 [2024-05-15 01:31:28.373954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.374317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.374333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.374712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.375145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.375162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.375518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.375808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.375825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.376184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.376554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.376570] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.376981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.377283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.377300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.377594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.377939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.377955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.378296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.378642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.378658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 
00:28:52.966 [2024-05-15 01:31:28.378949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.379296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.379313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.379685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.379983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.379999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.380285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.380624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.380641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.381005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.381366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.381383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.381726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.382019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.382035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.382389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.382742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.382758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.383122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.383413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.383429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 
00:28:52.966 [2024-05-15 01:31:28.383821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.384172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.384188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.384495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.384850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.384867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.385162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.385515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.385532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.385825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.386180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.386201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.386566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.386755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.386771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.387122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.387415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.387432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.966 qpair failed and we were unable to recover it. 00:28:52.966 [2024-05-15 01:31:28.387835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.388059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.966 [2024-05-15 01:31:28.388075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 
00:28:52.967 [2024-05-15 01:31:28.388466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.388799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.388815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 [2024-05-15 01:31:28.389098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.389439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.389456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 [2024-05-15 01:31:28.389771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.390181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.390201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 [2024-05-15 01:31:28.390537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.390696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.390712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 [2024-05-15 01:31:28.391000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.391339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.391356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 [2024-05-15 01:31:28.391786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.392006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.392022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 [2024-05-15 01:31:28.392145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.392513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.392530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 
00:28:52.967 [2024-05-15 01:31:28.392866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.393131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.393148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:28:52.967 [2024-05-15 01:31:28.393563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:52.967 [2024-05-15 01:31:28.393831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.393848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:52.967 [2024-05-15 01:31:28.394263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.967 [2024-05-15 01:31:28.394542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.394559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 [2024-05-15 01:31:28.394967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.395344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.395361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 [2024-05-15 01:31:28.395645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.396002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.396018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 [2024-05-15 01:31:28.396395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.396800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.396816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 
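The interleaved xtrace lines above show nvmf_target_disconnect_tc2 leaving the wait loop in common/autotest_common.sh ("(( i == 0 ))" followed by "return 0") and nvmf/common.sh running "timing_exit start_nvmf_tgt", meaning the restarted nvmf target application is considered up; the surrounding posix_sock_create/qpair messages are the host-side reconnect attempts from the earlier disconnect still draining. If one wanted to see whether the target is already listening on the NVMe/TCP port at this point, a check like the one below could be run on the target host. It is a hypothetical aside: the tool choice is an assumption, and the 4420 listener is normally only added by the later listener RPCs, so an empty result here would be expected:

    # list listening TCP sockets and look for the NVMe/TCP port seen in the log
    ss -ltn | grep -w 4420 || echo 'no listener on 4420 yet'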
00:28:52.967 [2024-05-15 01:31:28.397227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.397664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.397681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 [2024-05-15 01:31:28.398035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.398266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.398282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 [2024-05-15 01:31:28.398685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.398817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.398833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 [2024-05-15 01:31:28.399208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.399609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.399627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 [2024-05-15 01:31:28.399772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.400127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.400143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 [2024-05-15 01:31:28.400569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.400734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.400751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 [2024-05-15 01:31:28.401080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.401421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.401437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 
00:28:52.967 [2024-05-15 01:31:28.401790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.402071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.402088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 [2024-05-15 01:31:28.402391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.402738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.402755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 [2024-05-15 01:31:28.403131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.403552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.403569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 [2024-05-15 01:31:28.403954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.404360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.404376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 [2024-05-15 01:31:28.404709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.405152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.405168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.967 qpair failed and we were unable to recover it. 00:28:52.967 [2024-05-15 01:31:28.405477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.405794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.967 [2024-05-15 01:31:28.405812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.406228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.406561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.406577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 
00:28:52.968 [2024-05-15 01:31:28.406941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.407277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.407295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.407674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.408028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.408046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.408333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.408613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.408631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.408921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.409332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.409349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.409734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.410095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.410111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.410521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.410827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.410844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.411133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.411429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.411445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 
00:28:52.968 [2024-05-15 01:31:28.411802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.412150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.412167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.412606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.412960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.412977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.413317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.413613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.413630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.413901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.414164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.414180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.414468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.414764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.414780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.415081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.415422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.415439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.415772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.416047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.416063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 
00:28:52.968 [2024-05-15 01:31:28.416347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.416631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.416647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.417082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.417427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.417446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.417748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.418114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.418130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.418485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.418815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.418831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.419204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.419499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.419515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.419867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.420233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.420250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.420610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.420888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.420905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 
00:28:52.968 [2024-05-15 01:31:28.421213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.421644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.421661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.422073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.422346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.422363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.422669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.423011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.423028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.423369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.423775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.423791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.424148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.424577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.424593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.424948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.425249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.425266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 00:28:52.968 [2024-05-15 01:31:28.425632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.425901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.425917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.968 qpair failed and we were unable to recover it. 
00:28:52.968 [2024-05-15 01:31:28.426216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.426494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.968 [2024-05-15 01:31:28.426511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.426793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.426983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.426999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.427131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.427276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.427293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.427586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.427868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.427885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.428185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.428482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.428498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.428854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.429158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.429175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.429513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.429831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.429847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 
00:28:52.969 [2024-05-15 01:31:28.430222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.430578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.430594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.430906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.431089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.431105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.431397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.431740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.431757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.432037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.432335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.432352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.432637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.432973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.432989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.433291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.433726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.433742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.434049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.434344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.434361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 
00:28:52.969 [2024-05-15 01:31:28.434701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.435104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.435120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.435478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.435899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.435916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.436294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.436637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.436654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.437022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.437307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.437323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.437680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.969 [2024-05-15 01:31:28.437980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.437998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:52.969 [2024-05-15 01:31:28.438364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.969 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.969 [2024-05-15 01:31:28.438740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.438757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 
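Interleaved with the connection errors, the test script begins provisioning the target side; the first step is the Malloc0 backing bdev requested by target_disconnect.sh@19 above. A standalone sketch of the same call, assuming an SPDK target application is already running and that scripts/rpc.py from the SPDK tree is used (the harness's rpc_cmd wrapper is assumed to forward to it):

  # Create a 64 MB malloc bdev with a 512-byte block size, named Malloc0 as in the log.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0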
00:28:52.969 [2024-05-15 01:31:28.439096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.439257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.439273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.439564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.439847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.439863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.440148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.440434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.440451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.440752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.441039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.441056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.441402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.441756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.441772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.442113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.442409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.442425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 00:28:52.969 [2024-05-15 01:31:28.442761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.443222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.969 [2024-05-15 01:31:28.443238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.969 qpair failed and we were unable to recover it. 
00:28:52.970 [2024-05-15 01:31:28.443534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.443812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.443828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 00:28:52.970 [2024-05-15 01:31:28.444117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.444454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.444471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 00:28:52.970 [2024-05-15 01:31:28.444812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.445139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.445156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 00:28:52.970 [2024-05-15 01:31:28.445534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.445951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.445968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 00:28:52.970 [2024-05-15 01:31:28.446408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.446681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.446698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 00:28:52.970 [2024-05-15 01:31:28.447008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.447308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.447325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 00:28:52.970 [2024-05-15 01:31:28.447599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.447998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.448015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 
00:28:52.970 [2024-05-15 01:31:28.448274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.448617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.448634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 00:28:52.970 [2024-05-15 01:31:28.448985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.449343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.449361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 00:28:52.970 [2024-05-15 01:31:28.449773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.450177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.450199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 00:28:52.970 [2024-05-15 01:31:28.450528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.450896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.450913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 00:28:52.970 [2024-05-15 01:31:28.451211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.451569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.451588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 00:28:52.970 [2024-05-15 01:31:28.451870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.452254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.452272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 00:28:52.970 [2024-05-15 01:31:28.452645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.452996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.453015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 
00:28:52.970 [2024-05-15 01:31:28.453168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.453597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.453616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 00:28:52.970 [2024-05-15 01:31:28.453890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.454251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.454268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 00:28:52.970 [2024-05-15 01:31:28.454551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.454898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.454914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 00:28:52.970 [2024-05-15 01:31:28.455262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.455540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.455557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 00:28:52.970 [2024-05-15 01:31:28.455897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.456250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.456266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 00:28:52.970 [2024-05-15 01:31:28.456541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 Malloc0 00:28:52.970 [2024-05-15 01:31:28.456884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.456900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 00:28:52.970 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.970 [2024-05-15 01:31:28.457171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:52.970 [2024-05-15 01:31:28.457516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 [2024-05-15 01:31:28.457533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.970 qpair failed and we were unable to recover it. 
00:28:52.970 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.970 [2024-05-15 01:31:28.457874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.970 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.970 [2024-05-15 01:31:28.458161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.458177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.458468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.458852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.458868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.459259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.459581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.459597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.460005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.460290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.460306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.460512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.460894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.460910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.461254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.461537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.461553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.461855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.462140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.462156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 
00:28:52.971 [2024-05-15 01:31:28.462565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.462842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.462859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.463268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.463682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.463698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.463985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.464056] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:52.971 [2024-05-15 01:31:28.464112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.464127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.464480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.464837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.464853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.465207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.465488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.465505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.465809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.466090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.466106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.466535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.466884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.466900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 
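The nvmf_create_transport call issued a few lines earlier is what produces the '*** TCP Transport Init ***' notice in this block. A rough standalone equivalent, under the same assumptions as the sketch above (running target, scripts/rpc.py):

  # Register the TCP transport with the NVMe-oF target; '-o' is passed exactly as the
  # test script passes it, and all other transport options are left at their defaults.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o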
00:28:52.971 [2024-05-15 01:31:28.467030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.467381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.467398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.467784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.468064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.468080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.468524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.468875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.468891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.469370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.469701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.469718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.470012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.470349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.470366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.470679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.471109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.471125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.471472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.471896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.471913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 
00:28:52.971 [2024-05-15 01:31:28.472219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.472563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.472579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.971 [2024-05-15 01:31:28.472889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:52.971 [2024-05-15 01:31:28.473175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.473196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.971 [2024-05-15 01:31:28.473558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.971 [2024-05-15 01:31:28.474006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.474023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.474385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.474789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.474805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.475212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.475616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.475632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.971 qpair failed and we were unable to recover it. 00:28:52.971 [2024-05-15 01:31:28.475787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.971 [2024-05-15 01:31:28.476004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.476020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 
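Next the script creates the subsystem the host keeps trying to reach. Standalone sketch, same assumptions:

  # Subsystem NQN and serial number as used by the test; -a allows any host NQN to connect.
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001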
00:28:52.972 [2024-05-15 01:31:28.476364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.476711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.476728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 [2024-05-15 01:31:28.477011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.477446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.477475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 [2024-05-15 01:31:28.477827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.478202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.478218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 [2024-05-15 01:31:28.478656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.479008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.479025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 [2024-05-15 01:31:28.479322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.479598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.479614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 [2024-05-15 01:31:28.479989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.480184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.480203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 [2024-05-15 01:31:28.480558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.972 [2024-05-15 01:31:28.480892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.480908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 
00:28:52.972 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:52.972 [2024-05-15 01:31:28.481320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.972 [2024-05-15 01:31:28.481668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.481685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.972 [2024-05-15 01:31:28.482118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.482473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.482489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 [2024-05-15 01:31:28.482916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.483262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.483279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 [2024-05-15 01:31:28.483637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.483918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.483934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 [2024-05-15 01:31:28.484344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.484699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.484715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 [2024-05-15 01:31:28.485068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.485420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.485436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 
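The Malloc0 bdev created earlier is then attached to that subsystem as a namespace. Standalone sketch, same assumptions:

  # Expose Malloc0 as a namespace of cnode1; the namespace ID is auto-assigned.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0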
00:28:52.972 [2024-05-15 01:31:28.485715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.486121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.486137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 [2024-05-15 01:31:28.486442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.486892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.486908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 [2024-05-15 01:31:28.487318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.487769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.487785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 [2024-05-15 01:31:28.488220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.488573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.488589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.972 [2024-05-15 01:31:28.488891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:52.972 [2024-05-15 01:31:28.489237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.489254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.972 [2024-05-15 01:31:28.489630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.972 [2024-05-15 01:31:28.489894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.489911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 
00:28:52.972 [2024-05-15 01:31:28.490317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.490705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.490721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 [2024-05-15 01:31:28.491020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.491373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.491390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 [2024-05-15 01:31:28.491827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.492113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.492129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 [2024-05-15 01:31:28.492498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.492901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.492918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 [2024-05-15 01:31:28.493224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.493578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.493594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 [2024-05-15 01:31:28.493954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.494299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.494316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 00:28:52.972 [2024-05-15 01:31:28.494669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.495073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.495090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.972 qpair failed and we were unable to recover it. 
00:28:52.972 [2024-05-15 01:31:28.495502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.495729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.972 [2024-05-15 01:31:28.495745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.973 qpair failed and we were unable to recover it. 00:28:52.973 [2024-05-15 01:31:28.496020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.973 [2024-05-15 01:31:28.496087] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:52.973 [2024-05-15 01:31:28.496338] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:52.973 [2024-05-15 01:31:28.496448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:52.973 [2024-05-15 01:31:28.496465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f8560 with addr=10.0.0.2, port=4420 00:28:52.973 qpair failed and we were unable to recover it. 00:28:52.973 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.973 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:52.973 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:52.973 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:52.973 [2024-05-15 01:31:28.504745] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.973 [2024-05-15 01:31:28.504907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.973 [2024-05-15 01:31:28.504935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.973 [2024-05-15 01:31:28.504950] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.973 [2024-05-15 01:31:28.504962] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:52.973 [2024-05-15 01:31:28.504992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.973 qpair failed and we were unable to recover it. 
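The two nvmf_subsystem_add_listener calls above are what finally open 10.0.0.2 port 4420 (hence the 'NVMe/TCP Target Listening' notice); from this point the host's connect() attempts no longer fail with ECONNREFUSED and the errors change character. Standalone sketch of both listeners, same assumptions:

  # Listen for the data subsystem and for discovery on the same TCP portal.
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420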
00:28:52.973 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:52.973 01:31:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@58 -- # wait 79813 00:28:52.973 [2024-05-15 01:31:28.514635] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.973 [2024-05-15 01:31:28.514748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.973 [2024-05-15 01:31:28.514768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.973 [2024-05-15 01:31:28.514778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.973 [2024-05-15 01:31:28.514786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:52.973 [2024-05-15 01:31:28.514806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.973 qpair failed and we were unable to recover it. 00:28:52.973 [2024-05-15 01:31:28.524668] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.973 [2024-05-15 01:31:28.524787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.973 [2024-05-15 01:31:28.524806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.973 [2024-05-15 01:31:28.524816] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.973 [2024-05-15 01:31:28.524825] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:52.973 [2024-05-15 01:31:28.524844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.973 qpair failed and we were unable to recover it. 00:28:52.973 [2024-05-15 01:31:28.534683] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.973 [2024-05-15 01:31:28.534848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.973 [2024-05-15 01:31:28.534872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.973 [2024-05-15 01:31:28.534883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.973 [2024-05-15 01:31:28.534892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:52.973 [2024-05-15 01:31:28.534911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.973 qpair failed and we were unable to recover it. 
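From here the failure mode is different: the TCP connection succeeds, but the Fabrics CONNECT for an I/O qpair is rejected because the target no longer recognizes controller ID 0x1, which appears to be exactly what this target-disconnect test case is exercising. In the host-side message, sct 1 is the command-specific status code type, and sc 130 is 0x82, which the NVMe over Fabrics Connect status table lists as Connect Invalid Parameters. A one-liner for the decimal-to-hex conversion when reading such logs:

  # The host prints sc in decimal; convert to hex to match the spec's status tables.
  printf 'sc 130 = 0x%x\n' 130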
00:28:52.973 [2024-05-15 01:31:28.544604] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.973 [2024-05-15 01:31:28.544724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.973 [2024-05-15 01:31:28.544743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.973 [2024-05-15 01:31:28.544753] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.973 [2024-05-15 01:31:28.544761] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:52.973 [2024-05-15 01:31:28.544779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.973 qpair failed and we were unable to recover it. 00:28:52.973 [2024-05-15 01:31:28.554633] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.973 [2024-05-15 01:31:28.554750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.973 [2024-05-15 01:31:28.554768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.973 [2024-05-15 01:31:28.554778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.973 [2024-05-15 01:31:28.554786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:52.973 [2024-05-15 01:31:28.554804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.973 qpair failed and we were unable to recover it. 00:28:52.973 [2024-05-15 01:31:28.564660] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.973 [2024-05-15 01:31:28.564772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.973 [2024-05-15 01:31:28.564791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.973 [2024-05-15 01:31:28.564801] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.973 [2024-05-15 01:31:28.564810] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:52.973 [2024-05-15 01:31:28.564828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.973 qpair failed and we were unable to recover it. 
00:28:52.973 [2024-05-15 01:31:28.574761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.973 [2024-05-15 01:31:28.574883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.973 [2024-05-15 01:31:28.574901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.973 [2024-05-15 01:31:28.574911] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.973 [2024-05-15 01:31:28.574923] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:52.973 [2024-05-15 01:31:28.574941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.973 qpair failed and we were unable to recover it. 00:28:52.973 [2024-05-15 01:31:28.584809] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.973 [2024-05-15 01:31:28.584926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.973 [2024-05-15 01:31:28.584945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.973 [2024-05-15 01:31:28.584954] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.973 [2024-05-15 01:31:28.584963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:52.973 [2024-05-15 01:31:28.584981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.973 qpair failed and we were unable to recover it. 00:28:52.973 [2024-05-15 01:31:28.594875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.973 [2024-05-15 01:31:28.594991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.973 [2024-05-15 01:31:28.595011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.973 [2024-05-15 01:31:28.595021] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.973 [2024-05-15 01:31:28.595030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:52.973 [2024-05-15 01:31:28.595050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.973 qpair failed and we were unable to recover it. 
00:28:52.973 [2024-05-15 01:31:28.604832] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.973 [2024-05-15 01:31:28.604941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.973 [2024-05-15 01:31:28.604959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.973 [2024-05-15 01:31:28.604968] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.973 [2024-05-15 01:31:28.604977] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:52.973 [2024-05-15 01:31:28.604995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.973 qpair failed and we were unable to recover it. 00:28:52.973 [2024-05-15 01:31:28.614827] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.973 [2024-05-15 01:31:28.614942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.973 [2024-05-15 01:31:28.614961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.973 [2024-05-15 01:31:28.614970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.974 [2024-05-15 01:31:28.614979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:52.974 [2024-05-15 01:31:28.614997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.974 qpair failed and we were unable to recover it. 00:28:52.974 [2024-05-15 01:31:28.624838] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.974 [2024-05-15 01:31:28.624954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.974 [2024-05-15 01:31:28.624972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.974 [2024-05-15 01:31:28.624982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.974 [2024-05-15 01:31:28.624991] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:52.974 [2024-05-15 01:31:28.625009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.974 qpair failed and we were unable to recover it. 
00:28:52.974 [2024-05-15 01:31:28.634920] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.974 [2024-05-15 01:31:28.635029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.974 [2024-05-15 01:31:28.635047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.974 [2024-05-15 01:31:28.635056] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.974 [2024-05-15 01:31:28.635065] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:52.974 [2024-05-15 01:31:28.635083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.974 qpair failed and we were unable to recover it. 00:28:52.974 [2024-05-15 01:31:28.644954] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.974 [2024-05-15 01:31:28.645063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.974 [2024-05-15 01:31:28.645081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.974 [2024-05-15 01:31:28.645091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.974 [2024-05-15 01:31:28.645100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:52.974 [2024-05-15 01:31:28.645118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:52.974 qpair failed and we were unable to recover it. 00:28:53.235 [2024-05-15 01:31:28.654896] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.235 [2024-05-15 01:31:28.655020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.235 [2024-05-15 01:31:28.655038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.235 [2024-05-15 01:31:28.655048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.235 [2024-05-15 01:31:28.655057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.235 [2024-05-15 01:31:28.655075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.235 qpair failed and we were unable to recover it. 
00:28:53.235 [2024-05-15 01:31:28.664946] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.235 [2024-05-15 01:31:28.665053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.235 [2024-05-15 01:31:28.665071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.235 [2024-05-15 01:31:28.665081] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.235 [2024-05-15 01:31:28.665092] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.235 [2024-05-15 01:31:28.665110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.235 qpair failed and we were unable to recover it. 00:28:53.235 [2024-05-15 01:31:28.675048] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.235 [2024-05-15 01:31:28.675166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.235 [2024-05-15 01:31:28.675184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.235 [2024-05-15 01:31:28.675197] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.235 [2024-05-15 01:31:28.675206] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.235 [2024-05-15 01:31:28.675224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.235 qpair failed and we were unable to recover it. 00:28:53.235 [2024-05-15 01:31:28.685075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.235 [2024-05-15 01:31:28.685182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.235 [2024-05-15 01:31:28.685207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.235 [2024-05-15 01:31:28.685217] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.235 [2024-05-15 01:31:28.685225] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.235 [2024-05-15 01:31:28.685250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.235 qpair failed and we were unable to recover it. 
00:28:53.235 [2024-05-15 01:31:28.695145] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.235 [2024-05-15 01:31:28.695282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.235 [2024-05-15 01:31:28.695301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.235 [2024-05-15 01:31:28.695311] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.235 [2024-05-15 01:31:28.695319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.235 [2024-05-15 01:31:28.695338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.235 qpair failed and we were unable to recover it. 00:28:53.235 [2024-05-15 01:31:28.705133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.235 [2024-05-15 01:31:28.705254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.235 [2024-05-15 01:31:28.705272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.235 [2024-05-15 01:31:28.705281] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.235 [2024-05-15 01:31:28.705290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.235 [2024-05-15 01:31:28.705307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.235 qpair failed and we were unable to recover it. 00:28:53.235 [2024-05-15 01:31:28.715166] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.235 [2024-05-15 01:31:28.715285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.235 [2024-05-15 01:31:28.715304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.235 [2024-05-15 01:31:28.715314] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.235 [2024-05-15 01:31:28.715322] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.235 [2024-05-15 01:31:28.715341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.235 qpair failed and we were unable to recover it. 
00:28:53.235 [2024-05-15 01:31:28.725125] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.235 [2024-05-15 01:31:28.725281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.235 [2024-05-15 01:31:28.725300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.235 [2024-05-15 01:31:28.725309] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.235 [2024-05-15 01:31:28.725318] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.235 [2024-05-15 01:31:28.725336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.235 qpair failed and we were unable to recover it. 00:28:53.236 [2024-05-15 01:31:28.735215] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.236 [2024-05-15 01:31:28.735325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.236 [2024-05-15 01:31:28.735344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.236 [2024-05-15 01:31:28.735353] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.236 [2024-05-15 01:31:28.735361] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.236 [2024-05-15 01:31:28.735379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.236 qpair failed and we were unable to recover it. 00:28:53.236 [2024-05-15 01:31:28.745239] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.236 [2024-05-15 01:31:28.745403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.236 [2024-05-15 01:31:28.745421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.236 [2024-05-15 01:31:28.745431] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.236 [2024-05-15 01:31:28.745439] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.236 [2024-05-15 01:31:28.745458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.236 qpair failed and we were unable to recover it. 
00:28:53.236 [2024-05-15 01:31:28.755455] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.236 [2024-05-15 01:31:28.755572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.236 [2024-05-15 01:31:28.755591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.236 [2024-05-15 01:31:28.755606] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.236 [2024-05-15 01:31:28.755614] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.236 [2024-05-15 01:31:28.755632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.236 qpair failed and we were unable to recover it. 00:28:53.236 [2024-05-15 01:31:28.765369] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.236 [2024-05-15 01:31:28.765475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.236 [2024-05-15 01:31:28.765494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.236 [2024-05-15 01:31:28.765503] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.236 [2024-05-15 01:31:28.765512] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.236 [2024-05-15 01:31:28.765530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.236 qpair failed and we were unable to recover it. 00:28:53.236 [2024-05-15 01:31:28.775359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.236 [2024-05-15 01:31:28.775473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.236 [2024-05-15 01:31:28.775492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.236 [2024-05-15 01:31:28.775501] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.236 [2024-05-15 01:31:28.775510] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.236 [2024-05-15 01:31:28.775528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.236 qpair failed and we were unable to recover it. 
00:28:53.236 [2024-05-15 01:31:28.785341] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.236 [2024-05-15 01:31:28.785455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.236 [2024-05-15 01:31:28.785473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.236 [2024-05-15 01:31:28.785483] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.236 [2024-05-15 01:31:28.785492] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.236 [2024-05-15 01:31:28.785510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.236 qpair failed and we were unable to recover it. 00:28:53.236 [2024-05-15 01:31:28.795409] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.236 [2024-05-15 01:31:28.795517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.236 [2024-05-15 01:31:28.795536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.236 [2024-05-15 01:31:28.795545] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.236 [2024-05-15 01:31:28.795554] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.236 [2024-05-15 01:31:28.795572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.236 qpair failed and we were unable to recover it. 00:28:53.236 [2024-05-15 01:31:28.805428] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.236 [2024-05-15 01:31:28.805537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.236 [2024-05-15 01:31:28.805556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.236 [2024-05-15 01:31:28.805565] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.236 [2024-05-15 01:31:28.805573] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.236 [2024-05-15 01:31:28.805591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.236 qpair failed and we were unable to recover it. 
00:28:53.236 [2024-05-15 01:31:28.815448] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.236 [2024-05-15 01:31:28.815560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.236 [2024-05-15 01:31:28.815579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.236 [2024-05-15 01:31:28.815588] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.236 [2024-05-15 01:31:28.815597] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.236 [2024-05-15 01:31:28.815614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.236 qpair failed and we were unable to recover it. 00:28:53.236 [2024-05-15 01:31:28.825476] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.236 [2024-05-15 01:31:28.825591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.236 [2024-05-15 01:31:28.825610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.236 [2024-05-15 01:31:28.825619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.236 [2024-05-15 01:31:28.825628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.236 [2024-05-15 01:31:28.825646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.236 qpair failed and we were unable to recover it. 00:28:53.236 [2024-05-15 01:31:28.835518] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.236 [2024-05-15 01:31:28.835634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.236 [2024-05-15 01:31:28.835653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.236 [2024-05-15 01:31:28.835662] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.236 [2024-05-15 01:31:28.835670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.236 [2024-05-15 01:31:28.835689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.236 qpair failed and we were unable to recover it. 
00:28:53.236 [2024-05-15 01:31:28.845536] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.236 [2024-05-15 01:31:28.845646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.236 [2024-05-15 01:31:28.845664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.236 [2024-05-15 01:31:28.845677] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.236 [2024-05-15 01:31:28.845685] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.236 [2024-05-15 01:31:28.845703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.236 qpair failed and we were unable to recover it. 00:28:53.236 [2024-05-15 01:31:28.855540] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.236 [2024-05-15 01:31:28.855645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.236 [2024-05-15 01:31:28.855663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.236 [2024-05-15 01:31:28.855673] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.236 [2024-05-15 01:31:28.855681] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.236 [2024-05-15 01:31:28.855699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.236 qpair failed and we were unable to recover it. 00:28:53.236 [2024-05-15 01:31:28.865584] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.236 [2024-05-15 01:31:28.865709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.236 [2024-05-15 01:31:28.865727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.236 [2024-05-15 01:31:28.865737] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.236 [2024-05-15 01:31:28.865745] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.237 [2024-05-15 01:31:28.865763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.237 qpair failed and we were unable to recover it. 
00:28:53.237 [2024-05-15 01:31:28.875600] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.237 [2024-05-15 01:31:28.875705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.237 [2024-05-15 01:31:28.875723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.237 [2024-05-15 01:31:28.875733] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.237 [2024-05-15 01:31:28.875741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.237 [2024-05-15 01:31:28.875759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.237 qpair failed and we were unable to recover it. 00:28:53.237 [2024-05-15 01:31:28.885652] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.237 [2024-05-15 01:31:28.885812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.237 [2024-05-15 01:31:28.885831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.237 [2024-05-15 01:31:28.885840] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.237 [2024-05-15 01:31:28.885848] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.237 [2024-05-15 01:31:28.885866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.237 qpair failed and we were unable to recover it. 00:28:53.237 [2024-05-15 01:31:28.895674] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.237 [2024-05-15 01:31:28.895793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.237 [2024-05-15 01:31:28.895811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.237 [2024-05-15 01:31:28.895821] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.237 [2024-05-15 01:31:28.895829] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.237 [2024-05-15 01:31:28.895847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.237 qpair failed and we were unable to recover it. 
00:28:53.237 [2024-05-15 01:31:28.905685] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.237 [2024-05-15 01:31:28.905796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.237 [2024-05-15 01:31:28.905815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.237 [2024-05-15 01:31:28.905824] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.237 [2024-05-15 01:31:28.905832] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.237 [2024-05-15 01:31:28.905850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.237 qpair failed and we were unable to recover it. 00:28:53.237 [2024-05-15 01:31:28.915730] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.237 [2024-05-15 01:31:28.915839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.237 [2024-05-15 01:31:28.915857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.237 [2024-05-15 01:31:28.915866] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.237 [2024-05-15 01:31:28.915875] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.237 [2024-05-15 01:31:28.915893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.237 qpair failed and we were unable to recover it. 00:28:53.498 [2024-05-15 01:31:28.925733] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.498 [2024-05-15 01:31:28.925842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.498 [2024-05-15 01:31:28.925861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.498 [2024-05-15 01:31:28.925870] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.498 [2024-05-15 01:31:28.925879] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.498 [2024-05-15 01:31:28.925897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.498 qpair failed and we were unable to recover it. 
00:28:53.498 [2024-05-15 01:31:28.935694] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.498 [2024-05-15 01:31:28.935804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.498 [2024-05-15 01:31:28.935822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.498 [2024-05-15 01:31:28.935835] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.498 [2024-05-15 01:31:28.935843] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.498 [2024-05-15 01:31:28.935861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.498 qpair failed and we were unable to recover it. 00:28:53.498 [2024-05-15 01:31:28.945793] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.498 [2024-05-15 01:31:28.945903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.498 [2024-05-15 01:31:28.945921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.498 [2024-05-15 01:31:28.945931] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.498 [2024-05-15 01:31:28.945939] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.498 [2024-05-15 01:31:28.945957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.498 qpair failed and we were unable to recover it. 00:28:53.498 [2024-05-15 01:31:28.955795] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.498 [2024-05-15 01:31:28.955905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.498 [2024-05-15 01:31:28.955923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.498 [2024-05-15 01:31:28.955933] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.498 [2024-05-15 01:31:28.955941] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.498 [2024-05-15 01:31:28.955959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.498 qpair failed and we were unable to recover it. 
00:28:53.498 [2024-05-15 01:31:28.965878] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.498 [2024-05-15 01:31:28.965994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.498 [2024-05-15 01:31:28.966012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.498 [2024-05-15 01:31:28.966022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.498 [2024-05-15 01:31:28.966030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.498 [2024-05-15 01:31:28.966048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.498 qpair failed and we were unable to recover it. 00:28:53.498 [2024-05-15 01:31:28.975867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.498 [2024-05-15 01:31:28.975982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.499 [2024-05-15 01:31:28.976000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.499 [2024-05-15 01:31:28.976010] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.499 [2024-05-15 01:31:28.976018] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.499 [2024-05-15 01:31:28.976036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.499 qpair failed and we were unable to recover it. 00:28:53.499 [2024-05-15 01:31:28.985898] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.499 [2024-05-15 01:31:28.986049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.499 [2024-05-15 01:31:28.986067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.499 [2024-05-15 01:31:28.986077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.499 [2024-05-15 01:31:28.986085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.499 [2024-05-15 01:31:28.986102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.499 qpair failed and we were unable to recover it. 
00:28:53.499 [2024-05-15 01:31:28.995941] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.499 [2024-05-15 01:31:28.996063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.499 [2024-05-15 01:31:28.996082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.499 [2024-05-15 01:31:28.996092] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.499 [2024-05-15 01:31:28.996100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.499 [2024-05-15 01:31:28.996118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.499 qpair failed and we were unable to recover it. 00:28:53.499 [2024-05-15 01:31:29.005956] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.499 [2024-05-15 01:31:29.006063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.499 [2024-05-15 01:31:29.006081] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.499 [2024-05-15 01:31:29.006091] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.499 [2024-05-15 01:31:29.006100] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.499 [2024-05-15 01:31:29.006118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.499 qpair failed and we were unable to recover it. 00:28:53.499 [2024-05-15 01:31:29.015996] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.499 [2024-05-15 01:31:29.016123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.499 [2024-05-15 01:31:29.016141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.499 [2024-05-15 01:31:29.016151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.499 [2024-05-15 01:31:29.016159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.499 [2024-05-15 01:31:29.016178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.499 qpair failed and we were unable to recover it. 
00:28:53.499 [2024-05-15 01:31:29.026035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.499 [2024-05-15 01:31:29.026164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.499 [2024-05-15 01:31:29.026185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.499 [2024-05-15 01:31:29.026199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.499 [2024-05-15 01:31:29.026208] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.499 [2024-05-15 01:31:29.026226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.499 qpair failed and we were unable to recover it. 00:28:53.499 [2024-05-15 01:31:29.036039] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.499 [2024-05-15 01:31:29.036149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.499 [2024-05-15 01:31:29.036167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.499 [2024-05-15 01:31:29.036177] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.499 [2024-05-15 01:31:29.036186] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.499 [2024-05-15 01:31:29.036209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.499 qpair failed and we were unable to recover it. 00:28:53.499 [2024-05-15 01:31:29.046091] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.499 [2024-05-15 01:31:29.046207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.499 [2024-05-15 01:31:29.046226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.499 [2024-05-15 01:31:29.046235] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.499 [2024-05-15 01:31:29.046244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.499 [2024-05-15 01:31:29.046261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.499 qpair failed and we were unable to recover it. 
00:28:53.499 [2024-05-15 01:31:29.056087] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.499 [2024-05-15 01:31:29.056200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.499 [2024-05-15 01:31:29.056219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.499 [2024-05-15 01:31:29.056228] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.499 [2024-05-15 01:31:29.056237] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.499 [2024-05-15 01:31:29.056255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.499 qpair failed and we were unable to recover it. 00:28:53.499 [2024-05-15 01:31:29.066136] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.499 [2024-05-15 01:31:29.066253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.499 [2024-05-15 01:31:29.066272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.499 [2024-05-15 01:31:29.066281] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.499 [2024-05-15 01:31:29.066289] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.499 [2024-05-15 01:31:29.066308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.499 qpair failed and we were unable to recover it. 00:28:53.499 [2024-05-15 01:31:29.076143] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.499 [2024-05-15 01:31:29.076257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.499 [2024-05-15 01:31:29.076276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.499 [2024-05-15 01:31:29.076285] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.499 [2024-05-15 01:31:29.076294] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.499 [2024-05-15 01:31:29.076312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.499 qpair failed and we were unable to recover it. 
00:28:53.499 [2024-05-15 01:31:29.086228] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.499 [2024-05-15 01:31:29.086338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.499 [2024-05-15 01:31:29.086357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.499 [2024-05-15 01:31:29.086366] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.499 [2024-05-15 01:31:29.086375] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.499 [2024-05-15 01:31:29.086392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.499 qpair failed and we were unable to recover it. 00:28:53.499 [2024-05-15 01:31:29.096154] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.499 [2024-05-15 01:31:29.096302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.499 [2024-05-15 01:31:29.096321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.499 [2024-05-15 01:31:29.096331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.499 [2024-05-15 01:31:29.096340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.499 [2024-05-15 01:31:29.096358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.499 qpair failed and we were unable to recover it. 00:28:53.499 [2024-05-15 01:31:29.106262] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.499 [2024-05-15 01:31:29.106381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.499 [2024-05-15 01:31:29.106399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.499 [2024-05-15 01:31:29.106409] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.499 [2024-05-15 01:31:29.106417] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.499 [2024-05-15 01:31:29.106436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.499 qpair failed and we were unable to recover it. 
00:28:53.499 [2024-05-15 01:31:29.116276] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.500 [2024-05-15 01:31:29.116383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.500 [2024-05-15 01:31:29.116404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.500 [2024-05-15 01:31:29.116414] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.500 [2024-05-15 01:31:29.116422] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.500 [2024-05-15 01:31:29.116441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.500 qpair failed and we were unable to recover it. 00:28:53.500 [2024-05-15 01:31:29.126292] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.500 [2024-05-15 01:31:29.126402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.500 [2024-05-15 01:31:29.126420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.500 [2024-05-15 01:31:29.126430] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.500 [2024-05-15 01:31:29.126439] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.500 [2024-05-15 01:31:29.126457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.500 qpair failed and we were unable to recover it. 00:28:53.500 [2024-05-15 01:31:29.136337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.500 [2024-05-15 01:31:29.136610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.500 [2024-05-15 01:31:29.136630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.500 [2024-05-15 01:31:29.136640] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.500 [2024-05-15 01:31:29.136648] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.500 [2024-05-15 01:31:29.136666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.500 qpair failed and we were unable to recover it. 
00:28:53.500 [2024-05-15 01:31:29.146363] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.500 [2024-05-15 01:31:29.146477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.500 [2024-05-15 01:31:29.146495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.500 [2024-05-15 01:31:29.146504] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.500 [2024-05-15 01:31:29.146513] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.500 [2024-05-15 01:31:29.146530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.500 qpair failed and we were unable to recover it. 00:28:53.500 [2024-05-15 01:31:29.156391] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.500 [2024-05-15 01:31:29.156516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.500 [2024-05-15 01:31:29.156534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.500 [2024-05-15 01:31:29.156543] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.500 [2024-05-15 01:31:29.156552] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.500 [2024-05-15 01:31:29.156574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.500 qpair failed and we were unable to recover it. 00:28:53.500 [2024-05-15 01:31:29.166419] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.500 [2024-05-15 01:31:29.166530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.500 [2024-05-15 01:31:29.166548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.500 [2024-05-15 01:31:29.166557] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.500 [2024-05-15 01:31:29.166566] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.500 [2024-05-15 01:31:29.166583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.500 qpair failed and we were unable to recover it. 
00:28:53.500 [2024-05-15 01:31:29.176482] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.500 [2024-05-15 01:31:29.176603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.500 [2024-05-15 01:31:29.176621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.500 [2024-05-15 01:31:29.176630] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.500 [2024-05-15 01:31:29.176639] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.500 [2024-05-15 01:31:29.176656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.500 qpair failed and we were unable to recover it. 00:28:53.500 [2024-05-15 01:31:29.186470] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.500 [2024-05-15 01:31:29.186581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.500 [2024-05-15 01:31:29.186599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.500 [2024-05-15 01:31:29.186608] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.500 [2024-05-15 01:31:29.186617] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.500 [2024-05-15 01:31:29.186634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.500 qpair failed and we were unable to recover it. 00:28:53.761 [2024-05-15 01:31:29.196500] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.761 [2024-05-15 01:31:29.196608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.761 [2024-05-15 01:31:29.196627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.761 [2024-05-15 01:31:29.196636] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.761 [2024-05-15 01:31:29.196644] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.761 [2024-05-15 01:31:29.196662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.761 qpair failed and we were unable to recover it. 
00:28:53.761 [2024-05-15 01:31:29.206562] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.761 [2024-05-15 01:31:29.206681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.761 [2024-05-15 01:31:29.206703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.761 [2024-05-15 01:31:29.206713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.761 [2024-05-15 01:31:29.206721] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.761 [2024-05-15 01:31:29.206739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.761 qpair failed and we were unable to recover it. 00:28:53.761 [2024-05-15 01:31:29.216491] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.761 [2024-05-15 01:31:29.216604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.761 [2024-05-15 01:31:29.216622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.761 [2024-05-15 01:31:29.216631] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.761 [2024-05-15 01:31:29.216640] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.761 [2024-05-15 01:31:29.216657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.761 qpair failed and we were unable to recover it. 00:28:53.761 [2024-05-15 01:31:29.226583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.761 [2024-05-15 01:31:29.226697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.761 [2024-05-15 01:31:29.226716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.761 [2024-05-15 01:31:29.226725] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.761 [2024-05-15 01:31:29.226734] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.761 [2024-05-15 01:31:29.226752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.761 qpair failed and we were unable to recover it. 
00:28:53.761 [2024-05-15 01:31:29.236537] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.761 [2024-05-15 01:31:29.236650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.761 [2024-05-15 01:31:29.236668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.761 [2024-05-15 01:31:29.236678] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.761 [2024-05-15 01:31:29.236686] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.761 [2024-05-15 01:31:29.236703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.761 qpair failed and we were unable to recover it. 00:28:53.761 [2024-05-15 01:31:29.246651] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.761 [2024-05-15 01:31:29.246762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.761 [2024-05-15 01:31:29.246780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.761 [2024-05-15 01:31:29.246789] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.761 [2024-05-15 01:31:29.246797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.761 [2024-05-15 01:31:29.246818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.761 qpair failed and we were unable to recover it. 00:28:53.761 [2024-05-15 01:31:29.256698] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.761 [2024-05-15 01:31:29.256822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.761 [2024-05-15 01:31:29.256840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.761 [2024-05-15 01:31:29.256850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.761 [2024-05-15 01:31:29.256859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.762 [2024-05-15 01:31:29.256877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.762 qpair failed and we were unable to recover it. 
00:28:53.762 [2024-05-15 01:31:29.266697] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.762 [2024-05-15 01:31:29.266808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.762 [2024-05-15 01:31:29.266826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.762 [2024-05-15 01:31:29.266836] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.762 [2024-05-15 01:31:29.266845] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.762 [2024-05-15 01:31:29.266863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.762 qpair failed and we were unable to recover it. 00:28:53.762 [2024-05-15 01:31:29.276730] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.762 [2024-05-15 01:31:29.276843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.762 [2024-05-15 01:31:29.276862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.762 [2024-05-15 01:31:29.276871] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.762 [2024-05-15 01:31:29.276880] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.762 [2024-05-15 01:31:29.276897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.762 qpair failed and we were unable to recover it. 00:28:53.762 [2024-05-15 01:31:29.286762] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.762 [2024-05-15 01:31:29.286879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.762 [2024-05-15 01:31:29.286897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.762 [2024-05-15 01:31:29.286907] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.762 [2024-05-15 01:31:29.286915] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.762 [2024-05-15 01:31:29.286933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.762 qpair failed and we were unable to recover it. 
00:28:53.762 [2024-05-15 01:31:29.296780] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.762 [2024-05-15 01:31:29.296891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.762 [2024-05-15 01:31:29.296912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.762 [2024-05-15 01:31:29.296922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.762 [2024-05-15 01:31:29.296931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.762 [2024-05-15 01:31:29.296948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.762 qpair failed and we were unable to recover it. 00:28:53.762 [2024-05-15 01:31:29.306792] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.762 [2024-05-15 01:31:29.306942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.762 [2024-05-15 01:31:29.306961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.762 [2024-05-15 01:31:29.306970] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.762 [2024-05-15 01:31:29.306979] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.762 [2024-05-15 01:31:29.306997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.762 qpair failed and we were unable to recover it. 00:28:53.762 [2024-05-15 01:31:29.316849] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.762 [2024-05-15 01:31:29.316970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.762 [2024-05-15 01:31:29.316988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.762 [2024-05-15 01:31:29.316998] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.762 [2024-05-15 01:31:29.317006] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.762 [2024-05-15 01:31:29.317024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.762 qpair failed and we were unable to recover it. 
00:28:53.762 [2024-05-15 01:31:29.326873] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.762 [2024-05-15 01:31:29.326981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.762 [2024-05-15 01:31:29.326999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.762 [2024-05-15 01:31:29.327009] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.762 [2024-05-15 01:31:29.327017] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.762 [2024-05-15 01:31:29.327034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.762 qpair failed and we were unable to recover it. 00:28:53.762 [2024-05-15 01:31:29.336899] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.762 [2024-05-15 01:31:29.337042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.762 [2024-05-15 01:31:29.337060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.762 [2024-05-15 01:31:29.337070] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.762 [2024-05-15 01:31:29.337081] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.762 [2024-05-15 01:31:29.337100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.762 qpair failed and we were unable to recover it. 00:28:53.762 [2024-05-15 01:31:29.346883] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.762 [2024-05-15 01:31:29.347037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.762 [2024-05-15 01:31:29.347056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.762 [2024-05-15 01:31:29.347065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.762 [2024-05-15 01:31:29.347073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.762 [2024-05-15 01:31:29.347091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.762 qpair failed and we were unable to recover it. 
00:28:53.762 [2024-05-15 01:31:29.356951] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.762 [2024-05-15 01:31:29.357056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.762 [2024-05-15 01:31:29.357074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.762 [2024-05-15 01:31:29.357084] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.762 [2024-05-15 01:31:29.357093] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.762 [2024-05-15 01:31:29.357111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.762 qpair failed and we were unable to recover it. 00:28:53.762 [2024-05-15 01:31:29.366964] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.762 [2024-05-15 01:31:29.367073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.762 [2024-05-15 01:31:29.367092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.762 [2024-05-15 01:31:29.367101] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.762 [2024-05-15 01:31:29.367110] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.762 [2024-05-15 01:31:29.367127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.762 qpair failed and we were unable to recover it. 00:28:53.762 [2024-05-15 01:31:29.377015] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.762 [2024-05-15 01:31:29.377128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.762 [2024-05-15 01:31:29.377146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.762 [2024-05-15 01:31:29.377156] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.763 [2024-05-15 01:31:29.377164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.763 [2024-05-15 01:31:29.377182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.763 qpair failed and we were unable to recover it. 
00:28:53.763 [2024-05-15 01:31:29.387034] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.763 [2024-05-15 01:31:29.387147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.763 [2024-05-15 01:31:29.387165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.763 [2024-05-15 01:31:29.387175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.763 [2024-05-15 01:31:29.387183] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.763 [2024-05-15 01:31:29.387204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.763 qpair failed and we were unable to recover it. 00:28:53.763 [2024-05-15 01:31:29.397065] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.763 [2024-05-15 01:31:29.397177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.763 [2024-05-15 01:31:29.397203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.763 [2024-05-15 01:31:29.397213] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.763 [2024-05-15 01:31:29.397221] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.763 [2024-05-15 01:31:29.397239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.763 qpair failed and we were unable to recover it. 00:28:53.763 [2024-05-15 01:31:29.407118] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.763 [2024-05-15 01:31:29.407253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.763 [2024-05-15 01:31:29.407272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.763 [2024-05-15 01:31:29.407282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.763 [2024-05-15 01:31:29.407290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.763 [2024-05-15 01:31:29.407308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.763 qpair failed and we were unable to recover it. 
00:28:53.763 [2024-05-15 01:31:29.417047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.763 [2024-05-15 01:31:29.417175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.763 [2024-05-15 01:31:29.417198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.763 [2024-05-15 01:31:29.417208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.763 [2024-05-15 01:31:29.417216] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.763 [2024-05-15 01:31:29.417234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.763 qpair failed and we were unable to recover it. 00:28:53.763 [2024-05-15 01:31:29.427154] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.763 [2024-05-15 01:31:29.427264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.763 [2024-05-15 01:31:29.427282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.763 [2024-05-15 01:31:29.427292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.763 [2024-05-15 01:31:29.427303] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.763 [2024-05-15 01:31:29.427321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.763 qpair failed and we were unable to recover it. 00:28:53.763 [2024-05-15 01:31:29.437162] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.763 [2024-05-15 01:31:29.437269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.763 [2024-05-15 01:31:29.437287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.763 [2024-05-15 01:31:29.437297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.763 [2024-05-15 01:31:29.437306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.763 [2024-05-15 01:31:29.437324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.763 qpair failed and we were unable to recover it. 
00:28:53.763 [2024-05-15 01:31:29.447213] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.763 [2024-05-15 01:31:29.447324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.763 [2024-05-15 01:31:29.447342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.763 [2024-05-15 01:31:29.447351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.763 [2024-05-15 01:31:29.447360] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:53.763 [2024-05-15 01:31:29.447378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.763 qpair failed and we were unable to recover it. 00:28:54.025 [2024-05-15 01:31:29.457268] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.025 [2024-05-15 01:31:29.457374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.025 [2024-05-15 01:31:29.457392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.025 [2024-05-15 01:31:29.457401] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.025 [2024-05-15 01:31:29.457410] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.025 [2024-05-15 01:31:29.457428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.025 qpair failed and we were unable to recover it. 00:28:54.025 [2024-05-15 01:31:29.467261] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.025 [2024-05-15 01:31:29.467380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.025 [2024-05-15 01:31:29.467398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.025 [2024-05-15 01:31:29.467407] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.025 [2024-05-15 01:31:29.467416] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.025 [2024-05-15 01:31:29.467433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.025 qpair failed and we were unable to recover it. 
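For reference, the status the host keeps logging ("sct 1, sc 130") is Status Code Type 0x1 (command specific) with Status Code 0x82, which the NVMe-oF specification defines for the Fabrics CONNECT command as Connect Invalid Parameters; that is consistent with the target's "Unknown controller ID" rejection. The hedged sketch below shows how a host-side completion callback could decode such a status with SPDK's public helpers; the callback name is a placeholder and this is not code taken from the test.

/*
 * Hedged sketch: decoding a failed completion. A CONNECT rejected by the
 * target completes with sct 0x1 and sc 0x82 (decimal 130), as in the log
 * entries above.
 */
#include <stdio.h>
#include "spdk/nvme.h"

static void connect_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;

	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "command failed: sct %u, sc %u (%s)\n",
			cpl->status.sct, cpl->status.sc,
			spdk_nvme_cpl_get_status_string(&cpl->status));
	}
}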
00:28:54.025 [2024-05-15 01:31:29.477282] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.025 [2024-05-15 01:31:29.477397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.025 [2024-05-15 01:31:29.477415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.025 [2024-05-15 01:31:29.477425] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.025 [2024-05-15 01:31:29.477433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.025 [2024-05-15 01:31:29.477451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.025 qpair failed and we were unable to recover it. 00:28:54.025 [2024-05-15 01:31:29.487359] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.025 [2024-05-15 01:31:29.487464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.025 [2024-05-15 01:31:29.487481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.025 [2024-05-15 01:31:29.487491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.025 [2024-05-15 01:31:29.487499] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.025 [2024-05-15 01:31:29.487517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.025 qpair failed and we were unable to recover it. 00:28:54.025 [2024-05-15 01:31:29.497362] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.025 [2024-05-15 01:31:29.497487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.025 [2024-05-15 01:31:29.497505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.025 [2024-05-15 01:31:29.497515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.025 [2024-05-15 01:31:29.497523] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.025 [2024-05-15 01:31:29.497541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.025 qpair failed and we were unable to recover it. 
00:28:54.025 [2024-05-15 01:31:29.507392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.025 [2024-05-15 01:31:29.507521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.025 [2024-05-15 01:31:29.507539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.025 [2024-05-15 01:31:29.507549] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.025 [2024-05-15 01:31:29.507557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.025 [2024-05-15 01:31:29.507575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.025 qpair failed and we were unable to recover it. 00:28:54.025 [2024-05-15 01:31:29.517424] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.025 [2024-05-15 01:31:29.517553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.025 [2024-05-15 01:31:29.517571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.025 [2024-05-15 01:31:29.517581] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.025 [2024-05-15 01:31:29.517594] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.025 [2024-05-15 01:31:29.517612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.025 qpair failed and we were unable to recover it. 00:28:54.025 [2024-05-15 01:31:29.527443] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.025 [2024-05-15 01:31:29.527558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.025 [2024-05-15 01:31:29.527576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.025 [2024-05-15 01:31:29.527586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.025 [2024-05-15 01:31:29.527594] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.025 [2024-05-15 01:31:29.527613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.025 qpair failed and we were unable to recover it. 
00:28:54.025 [2024-05-15 01:31:29.537488] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.025 [2024-05-15 01:31:29.537600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.025 [2024-05-15 01:31:29.537618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.025 [2024-05-15 01:31:29.537628] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.025 [2024-05-15 01:31:29.537637] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.025 [2024-05-15 01:31:29.537654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.025 qpair failed and we were unable to recover it. 00:28:54.025 [2024-05-15 01:31:29.547505] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.025 [2024-05-15 01:31:29.547614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.025 [2024-05-15 01:31:29.547632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.025 [2024-05-15 01:31:29.547642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.025 [2024-05-15 01:31:29.547650] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.025 [2024-05-15 01:31:29.547668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.025 qpair failed and we were unable to recover it. 00:28:54.025 [2024-05-15 01:31:29.557495] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.025 [2024-05-15 01:31:29.557648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.025 [2024-05-15 01:31:29.557666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.025 [2024-05-15 01:31:29.557675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.025 [2024-05-15 01:31:29.557684] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.025 [2024-05-15 01:31:29.557701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.025 qpair failed and we were unable to recover it. 
00:28:54.025 [2024-05-15 01:31:29.567556] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.025 [2024-05-15 01:31:29.567668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.025 [2024-05-15 01:31:29.567686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.025 [2024-05-15 01:31:29.567696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.025 [2024-05-15 01:31:29.567704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.025 [2024-05-15 01:31:29.567722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.025 qpair failed and we were unable to recover it. 00:28:54.025 [2024-05-15 01:31:29.577582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.025 [2024-05-15 01:31:29.577690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.025 [2024-05-15 01:31:29.577708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.025 [2024-05-15 01:31:29.577718] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.025 [2024-05-15 01:31:29.577726] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.025 [2024-05-15 01:31:29.577744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.025 qpair failed and we were unable to recover it. 00:28:54.025 [2024-05-15 01:31:29.587602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.026 [2024-05-15 01:31:29.587715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.026 [2024-05-15 01:31:29.587733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.026 [2024-05-15 01:31:29.587742] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.026 [2024-05-15 01:31:29.587751] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.026 [2024-05-15 01:31:29.587769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.026 qpair failed and we were unable to recover it. 
00:28:54.026 [2024-05-15 01:31:29.597612] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.026 [2024-05-15 01:31:29.597758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.026 [2024-05-15 01:31:29.597778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.026 [2024-05-15 01:31:29.597788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.026 [2024-05-15 01:31:29.597796] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.026 [2024-05-15 01:31:29.597815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.026 qpair failed and we were unable to recover it. 00:28:54.026 [2024-05-15 01:31:29.607637] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.026 [2024-05-15 01:31:29.607743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.026 [2024-05-15 01:31:29.607762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.026 [2024-05-15 01:31:29.607774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.026 [2024-05-15 01:31:29.607783] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.026 [2024-05-15 01:31:29.607801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.026 qpair failed and we were unable to recover it. 00:28:54.026 [2024-05-15 01:31:29.617621] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.026 [2024-05-15 01:31:29.617769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.026 [2024-05-15 01:31:29.617787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.026 [2024-05-15 01:31:29.617796] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.026 [2024-05-15 01:31:29.617805] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.026 [2024-05-15 01:31:29.617822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.026 qpair failed and we were unable to recover it. 
00:28:54.026 [2024-05-15 01:31:29.627719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.026 [2024-05-15 01:31:29.627833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.026 [2024-05-15 01:31:29.627851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.026 [2024-05-15 01:31:29.627860] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.026 [2024-05-15 01:31:29.627868] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.026 [2024-05-15 01:31:29.627886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.026 qpair failed and we were unable to recover it. 00:28:54.026 [2024-05-15 01:31:29.637757] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.026 [2024-05-15 01:31:29.637868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.026 [2024-05-15 01:31:29.637886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.026 [2024-05-15 01:31:29.637895] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.026 [2024-05-15 01:31:29.637904] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.026 [2024-05-15 01:31:29.637921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.026 qpair failed and we were unable to recover it. 00:28:54.026 [2024-05-15 01:31:29.647775] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.026 [2024-05-15 01:31:29.647888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.026 [2024-05-15 01:31:29.647906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.026 [2024-05-15 01:31:29.647915] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.026 [2024-05-15 01:31:29.647924] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.026 [2024-05-15 01:31:29.647941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.026 qpair failed and we were unable to recover it. 
00:28:54.026 [2024-05-15 01:31:29.657819] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.026 [2024-05-15 01:31:29.657928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.026 [2024-05-15 01:31:29.657945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.026 [2024-05-15 01:31:29.657955] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.026 [2024-05-15 01:31:29.657964] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.026 [2024-05-15 01:31:29.657982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.026 qpair failed and we were unable to recover it. 00:28:54.026 [2024-05-15 01:31:29.667835] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.026 [2024-05-15 01:31:29.667958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.026 [2024-05-15 01:31:29.667976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.026 [2024-05-15 01:31:29.667986] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.026 [2024-05-15 01:31:29.667994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.026 [2024-05-15 01:31:29.668012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.026 qpair failed and we were unable to recover it. 00:28:54.026 [2024-05-15 01:31:29.677892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.026 [2024-05-15 01:31:29.678006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.026 [2024-05-15 01:31:29.678025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.026 [2024-05-15 01:31:29.678034] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.026 [2024-05-15 01:31:29.678043] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.026 [2024-05-15 01:31:29.678061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.026 qpair failed and we were unable to recover it. 
00:28:54.026 [2024-05-15 01:31:29.687891] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.026 [2024-05-15 01:31:29.688003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.026 [2024-05-15 01:31:29.688020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.026 [2024-05-15 01:31:29.688030] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.026 [2024-05-15 01:31:29.688038] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.026 [2024-05-15 01:31:29.688056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.026 qpair failed and we were unable to recover it. 00:28:54.026 [2024-05-15 01:31:29.697941] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.026 [2024-05-15 01:31:29.698054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.026 [2024-05-15 01:31:29.698073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.026 [2024-05-15 01:31:29.698085] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.026 [2024-05-15 01:31:29.698095] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.026 [2024-05-15 01:31:29.698114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.026 qpair failed and we were unable to recover it. 00:28:54.026 [2024-05-15 01:31:29.707912] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.026 [2024-05-15 01:31:29.708042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.026 [2024-05-15 01:31:29.708061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.026 [2024-05-15 01:31:29.708071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.026 [2024-05-15 01:31:29.708079] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.026 [2024-05-15 01:31:29.708098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.026 qpair failed and we were unable to recover it. 
00:28:54.287 [2024-05-15 01:31:29.718012] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.287 [2024-05-15 01:31:29.718139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.287 [2024-05-15 01:31:29.718157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.287 [2024-05-15 01:31:29.718167] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.287 [2024-05-15 01:31:29.718176] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.287 [2024-05-15 01:31:29.718201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.287 qpair failed and we were unable to recover it. 00:28:54.287 [2024-05-15 01:31:29.728018] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.288 [2024-05-15 01:31:29.728129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.288 [2024-05-15 01:31:29.728147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.288 [2024-05-15 01:31:29.728157] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.288 [2024-05-15 01:31:29.728165] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.288 [2024-05-15 01:31:29.728183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.288 qpair failed and we were unable to recover it. 00:28:54.288 [2024-05-15 01:31:29.738229] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.288 [2024-05-15 01:31:29.738339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.288 [2024-05-15 01:31:29.738358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.288 [2024-05-15 01:31:29.738367] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.288 [2024-05-15 01:31:29.738376] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.288 [2024-05-15 01:31:29.738394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.288 qpair failed and we were unable to recover it. 
00:28:54.288 [2024-05-15 01:31:29.748092] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.288 [2024-05-15 01:31:29.748211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.288 [2024-05-15 01:31:29.748230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.288 [2024-05-15 01:31:29.748240] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.288 [2024-05-15 01:31:29.748248] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.288 [2024-05-15 01:31:29.748267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.288 qpair failed and we were unable to recover it. 00:28:54.288 [2024-05-15 01:31:29.758030] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.288 [2024-05-15 01:31:29.758146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.288 [2024-05-15 01:31:29.758164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.288 [2024-05-15 01:31:29.758174] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.288 [2024-05-15 01:31:29.758182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.288 [2024-05-15 01:31:29.758207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.288 qpair failed and we were unable to recover it. 00:28:54.288 [2024-05-15 01:31:29.768158] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.288 [2024-05-15 01:31:29.768275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.288 [2024-05-15 01:31:29.768294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.288 [2024-05-15 01:31:29.768303] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.288 [2024-05-15 01:31:29.768312] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.288 [2024-05-15 01:31:29.768330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.288 qpair failed and we were unable to recover it. 
00:28:54.288 [2024-05-15 01:31:29.778173] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.288 [2024-05-15 01:31:29.778293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.288 [2024-05-15 01:31:29.778312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.288 [2024-05-15 01:31:29.778322] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.288 [2024-05-15 01:31:29.778330] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.288 [2024-05-15 01:31:29.778349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.288 qpair failed and we were unable to recover it. 00:28:54.288 [2024-05-15 01:31:29.788198] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.288 [2024-05-15 01:31:29.788313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.288 [2024-05-15 01:31:29.788334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.288 [2024-05-15 01:31:29.788345] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.288 [2024-05-15 01:31:29.788354] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.288 [2024-05-15 01:31:29.788371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.288 qpair failed and we were unable to recover it. 00:28:54.288 [2024-05-15 01:31:29.798129] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.288 [2024-05-15 01:31:29.798244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.288 [2024-05-15 01:31:29.798263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.288 [2024-05-15 01:31:29.798273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.288 [2024-05-15 01:31:29.798282] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.288 [2024-05-15 01:31:29.798300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.288 qpair failed and we were unable to recover it. 
00:28:54.288 [2024-05-15 01:31:29.808217] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.288 [2024-05-15 01:31:29.808362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.288 [2024-05-15 01:31:29.808381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.288 [2024-05-15 01:31:29.808391] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.288 [2024-05-15 01:31:29.808399] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.288 [2024-05-15 01:31:29.808418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.288 qpair failed and we were unable to recover it. 00:28:54.288 [2024-05-15 01:31:29.818286] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.288 [2024-05-15 01:31:29.818396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.288 [2024-05-15 01:31:29.818415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.288 [2024-05-15 01:31:29.818424] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.288 [2024-05-15 01:31:29.818433] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.288 [2024-05-15 01:31:29.818451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.288 qpair failed and we were unable to recover it. 00:28:54.288 [2024-05-15 01:31:29.828313] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.288 [2024-05-15 01:31:29.828423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.288 [2024-05-15 01:31:29.828442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.288 [2024-05-15 01:31:29.828451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.288 [2024-05-15 01:31:29.828459] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.288 [2024-05-15 01:31:29.828478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.288 qpair failed and we were unable to recover it. 
00:28:54.288 [2024-05-15 01:31:29.838367] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.288 [2024-05-15 01:31:29.838481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.288 [2024-05-15 01:31:29.838499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.288 [2024-05-15 01:31:29.838509] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.288 [2024-05-15 01:31:29.838518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.288 [2024-05-15 01:31:29.838535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.288 qpair failed and we were unable to recover it. 00:28:54.288 [2024-05-15 01:31:29.848347] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.288 [2024-05-15 01:31:29.848460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.288 [2024-05-15 01:31:29.848478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.288 [2024-05-15 01:31:29.848488] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.288 [2024-05-15 01:31:29.848497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.288 [2024-05-15 01:31:29.848514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.288 qpair failed and we were unable to recover it. 00:28:54.288 [2024-05-15 01:31:29.858342] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.288 [2024-05-15 01:31:29.858451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.288 [2024-05-15 01:31:29.858469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.288 [2024-05-15 01:31:29.858478] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.288 [2024-05-15 01:31:29.858487] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.288 [2024-05-15 01:31:29.858505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.288 qpair failed and we were unable to recover it. 
00:28:54.288 [2024-05-15 01:31:29.868348] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.289 [2024-05-15 01:31:29.868461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.289 [2024-05-15 01:31:29.868480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.289 [2024-05-15 01:31:29.868489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.289 [2024-05-15 01:31:29.868498] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.289 [2024-05-15 01:31:29.868516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.289 qpair failed and we were unable to recover it. 00:28:54.289 [2024-05-15 01:31:29.878454] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.289 [2024-05-15 01:31:29.878566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.289 [2024-05-15 01:31:29.878588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.289 [2024-05-15 01:31:29.878597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.289 [2024-05-15 01:31:29.878606] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.289 [2024-05-15 01:31:29.878624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.289 qpair failed and we were unable to recover it. 00:28:54.289 [2024-05-15 01:31:29.888478] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.289 [2024-05-15 01:31:29.888586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.289 [2024-05-15 01:31:29.888604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.289 [2024-05-15 01:31:29.888613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.289 [2024-05-15 01:31:29.888622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.289 [2024-05-15 01:31:29.888639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.289 qpair failed and we were unable to recover it. 
00:28:54.289 [2024-05-15 01:31:29.898441] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.289 [2024-05-15 01:31:29.898550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.289 [2024-05-15 01:31:29.898568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.289 [2024-05-15 01:31:29.898578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.289 [2024-05-15 01:31:29.898586] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.289 [2024-05-15 01:31:29.898604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.289 qpair failed and we were unable to recover it. 00:28:54.289 [2024-05-15 01:31:29.908520] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.289 [2024-05-15 01:31:29.908633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.289 [2024-05-15 01:31:29.908651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.289 [2024-05-15 01:31:29.908661] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.289 [2024-05-15 01:31:29.908670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.289 [2024-05-15 01:31:29.908688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.289 qpair failed and we were unable to recover it. 00:28:54.289 [2024-05-15 01:31:29.918566] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.289 [2024-05-15 01:31:29.918672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.289 [2024-05-15 01:31:29.918691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.289 [2024-05-15 01:31:29.918700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.289 [2024-05-15 01:31:29.918709] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.289 [2024-05-15 01:31:29.918729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.289 qpair failed and we were unable to recover it. 
00:28:54.289 [2024-05-15 01:31:29.928524] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.289 [2024-05-15 01:31:29.928632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.289 [2024-05-15 01:31:29.928650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.289 [2024-05-15 01:31:29.928660] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.289 [2024-05-15 01:31:29.928668] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.289 [2024-05-15 01:31:29.928687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.289 qpair failed and we were unable to recover it. 00:28:54.289 [2024-05-15 01:31:29.938623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.289 [2024-05-15 01:31:29.938735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.289 [2024-05-15 01:31:29.938753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.289 [2024-05-15 01:31:29.938763] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.289 [2024-05-15 01:31:29.938772] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.289 [2024-05-15 01:31:29.938790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.289 qpair failed and we were unable to recover it. 00:28:54.289 [2024-05-15 01:31:29.948666] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.289 [2024-05-15 01:31:29.948792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.289 [2024-05-15 01:31:29.948810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.289 [2024-05-15 01:31:29.948820] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.289 [2024-05-15 01:31:29.948828] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.289 [2024-05-15 01:31:29.948846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.289 qpair failed and we were unable to recover it. 
00:28:54.289 [2024-05-15 01:31:29.958653] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.289 [2024-05-15 01:31:29.958766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.289 [2024-05-15 01:31:29.958784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.289 [2024-05-15 01:31:29.958794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.289 [2024-05-15 01:31:29.958802] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.289 [2024-05-15 01:31:29.958821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.289 qpair failed and we were unable to recover it. 00:28:54.289 [2024-05-15 01:31:29.968853] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.289 [2024-05-15 01:31:29.968981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.289 [2024-05-15 01:31:29.969003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.289 [2024-05-15 01:31:29.969012] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.289 [2024-05-15 01:31:29.969020] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.289 [2024-05-15 01:31:29.969038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.289 qpair failed and we were unable to recover it. 00:28:54.550 [2024-05-15 01:31:29.978738] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.550 [2024-05-15 01:31:29.978850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.550 [2024-05-15 01:31:29.978868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.550 [2024-05-15 01:31:29.978877] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.550 [2024-05-15 01:31:29.978886] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.550 [2024-05-15 01:31:29.978904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.550 qpair failed and we were unable to recover it. 
00:28:54.550 [2024-05-15 01:31:29.988751] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.550 [2024-05-15 01:31:29.988863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.550 [2024-05-15 01:31:29.988881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.550 [2024-05-15 01:31:29.988890] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.550 [2024-05-15 01:31:29.988899] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.550 [2024-05-15 01:31:29.988917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.550 qpair failed and we were unable to recover it. 00:28:54.550 [2024-05-15 01:31:29.998781] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.550 [2024-05-15 01:31:29.998891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.550 [2024-05-15 01:31:29.998909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.550 [2024-05-15 01:31:29.998918] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.550 [2024-05-15 01:31:29.998927] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.550 [2024-05-15 01:31:29.998945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.550 qpair failed and we were unable to recover it. 00:28:54.550 [2024-05-15 01:31:30.008761] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.550 [2024-05-15 01:31:30.008871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.550 [2024-05-15 01:31:30.008890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.550 [2024-05-15 01:31:30.008900] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.550 [2024-05-15 01:31:30.008909] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.550 [2024-05-15 01:31:30.008929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.550 qpair failed and we were unable to recover it. 
00:28:54.550 [2024-05-15 01:31:30.018867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.550 [2024-05-15 01:31:30.018981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.550 [2024-05-15 01:31:30.019001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.550 [2024-05-15 01:31:30.019011] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.550 [2024-05-15 01:31:30.019020] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.550 [2024-05-15 01:31:30.019039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.550 qpair failed and we were unable to recover it. 00:28:54.550 [2024-05-15 01:31:30.028817] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.550 [2024-05-15 01:31:30.028937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.550 [2024-05-15 01:31:30.028956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.550 [2024-05-15 01:31:30.028966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.550 [2024-05-15 01:31:30.028975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.550 [2024-05-15 01:31:30.028993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.550 qpair failed and we were unable to recover it. 00:28:54.550 [2024-05-15 01:31:30.038973] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.550 [2024-05-15 01:31:30.039093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.550 [2024-05-15 01:31:30.039112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.550 [2024-05-15 01:31:30.039122] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.550 [2024-05-15 01:31:30.039130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.551 [2024-05-15 01:31:30.039149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.551 qpair failed and we were unable to recover it. 
00:28:54.551 [2024-05-15 01:31:30.048948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.551 [2024-05-15 01:31:30.049059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.551 [2024-05-15 01:31:30.049077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.551 [2024-05-15 01:31:30.049087] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.551 [2024-05-15 01:31:30.049096] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.551 [2024-05-15 01:31:30.049113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.551 qpair failed and we were unable to recover it. 00:28:54.551 [2024-05-15 01:31:30.058892] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.551 [2024-05-15 01:31:30.059009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.551 [2024-05-15 01:31:30.059031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.551 [2024-05-15 01:31:30.059040] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.551 [2024-05-15 01:31:30.059049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.551 [2024-05-15 01:31:30.059067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.551 qpair failed and we were unable to recover it. 00:28:54.551 [2024-05-15 01:31:30.068989] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.551 [2024-05-15 01:31:30.069099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.551 [2024-05-15 01:31:30.069117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.551 [2024-05-15 01:31:30.069127] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.551 [2024-05-15 01:31:30.069136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.551 [2024-05-15 01:31:30.069153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.551 qpair failed and we were unable to recover it. 
00:28:54.551 [2024-05-15 01:31:30.079011] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.551 [2024-05-15 01:31:30.079118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.551 [2024-05-15 01:31:30.079136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.551 [2024-05-15 01:31:30.079146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.551 [2024-05-15 01:31:30.079154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.551 [2024-05-15 01:31:30.079172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.551 qpair failed and we were unable to recover it. 00:28:54.551 [2024-05-15 01:31:30.089033] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.551 [2024-05-15 01:31:30.089173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.551 [2024-05-15 01:31:30.089197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.551 [2024-05-15 01:31:30.089208] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.551 [2024-05-15 01:31:30.089218] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.551 [2024-05-15 01:31:30.089237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.551 qpair failed and we were unable to recover it. 00:28:54.551 [2024-05-15 01:31:30.098991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.551 [2024-05-15 01:31:30.099100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.551 [2024-05-15 01:31:30.099118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.551 [2024-05-15 01:31:30.099128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.551 [2024-05-15 01:31:30.099140] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.551 [2024-05-15 01:31:30.099158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.551 qpair failed and we were unable to recover it. 
00:28:54.551 [2024-05-15 01:31:30.109015] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.551 [2024-05-15 01:31:30.109126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.551 [2024-05-15 01:31:30.109145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.551 [2024-05-15 01:31:30.109155] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.551 [2024-05-15 01:31:30.109164] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.551 [2024-05-15 01:31:30.109182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.551 qpair failed and we were unable to recover it. 00:28:54.551 [2024-05-15 01:31:30.119108] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.551 [2024-05-15 01:31:30.119228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.551 [2024-05-15 01:31:30.119246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.551 [2024-05-15 01:31:30.119255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.551 [2024-05-15 01:31:30.119264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.551 [2024-05-15 01:31:30.119283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.551 qpair failed and we were unable to recover it. 00:28:54.551 [2024-05-15 01:31:30.129082] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.551 [2024-05-15 01:31:30.129254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.551 [2024-05-15 01:31:30.129273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.551 [2024-05-15 01:31:30.129283] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.551 [2024-05-15 01:31:30.129291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.551 [2024-05-15 01:31:30.129310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.551 qpair failed and we were unable to recover it. 
00:28:54.551 [2024-05-15 01:31:30.139179] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.551 [2024-05-15 01:31:30.139298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.551 [2024-05-15 01:31:30.139317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.551 [2024-05-15 01:31:30.139327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.551 [2024-05-15 01:31:30.139335] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.551 [2024-05-15 01:31:30.139353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.551 qpair failed and we were unable to recover it. 00:28:54.551 [2024-05-15 01:31:30.149197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.551 [2024-05-15 01:31:30.149314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.551 [2024-05-15 01:31:30.149332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.551 [2024-05-15 01:31:30.149342] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.551 [2024-05-15 01:31:30.149350] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.551 [2024-05-15 01:31:30.149368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.551 qpair failed and we were unable to recover it. 00:28:54.551 [2024-05-15 01:31:30.159163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.551 [2024-05-15 01:31:30.159280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.551 [2024-05-15 01:31:30.159299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.551 [2024-05-15 01:31:30.159308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.551 [2024-05-15 01:31:30.159317] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.551 [2024-05-15 01:31:30.159335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.551 qpair failed and we were unable to recover it. 
00:28:54.551 [2024-05-15 01:31:30.169260] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.551 [2024-05-15 01:31:30.169371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.551 [2024-05-15 01:31:30.169390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.551 [2024-05-15 01:31:30.169399] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.551 [2024-05-15 01:31:30.169408] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.551 [2024-05-15 01:31:30.169426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.551 qpair failed and we were unable to recover it. 00:28:54.551 [2024-05-15 01:31:30.179299] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.551 [2024-05-15 01:31:30.179419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.551 [2024-05-15 01:31:30.179437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.551 [2024-05-15 01:31:30.179447] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.551 [2024-05-15 01:31:30.179456] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.551 [2024-05-15 01:31:30.179474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.551 qpair failed and we were unable to recover it. 00:28:54.551 [2024-05-15 01:31:30.189311] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.551 [2024-05-15 01:31:30.189424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.551 [2024-05-15 01:31:30.189442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.551 [2024-05-15 01:31:30.189451] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.551 [2024-05-15 01:31:30.189463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.551 [2024-05-15 01:31:30.189482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.551 qpair failed and we were unable to recover it. 
00:28:54.551 [2024-05-15 01:31:30.199322] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.551 [2024-05-15 01:31:30.199434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.551 [2024-05-15 01:31:30.199453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.551 [2024-05-15 01:31:30.199462] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.551 [2024-05-15 01:31:30.199471] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.551 [2024-05-15 01:31:30.199489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.551 qpair failed and we were unable to recover it. 00:28:54.551 [2024-05-15 01:31:30.209350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.551 [2024-05-15 01:31:30.209458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.551 [2024-05-15 01:31:30.209476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.551 [2024-05-15 01:31:30.209486] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.551 [2024-05-15 01:31:30.209495] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.551 [2024-05-15 01:31:30.209513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.551 qpair failed and we were unable to recover it. 00:28:54.551 [2024-05-15 01:31:30.219563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.551 [2024-05-15 01:31:30.219673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.551 [2024-05-15 01:31:30.219692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.551 [2024-05-15 01:31:30.219701] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.551 [2024-05-15 01:31:30.219710] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.551 [2024-05-15 01:31:30.219728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.551 qpair failed and we were unable to recover it. 
00:28:54.551 [2024-05-15 01:31:30.229478] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.551 [2024-05-15 01:31:30.229643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.551 [2024-05-15 01:31:30.229661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.551 [2024-05-15 01:31:30.229670] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.551 [2024-05-15 01:31:30.229679] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.551 [2024-05-15 01:31:30.229697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.551 qpair failed and we were unable to recover it. 00:28:54.551 [2024-05-15 01:31:30.239458] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.551 [2024-05-15 01:31:30.239568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.552 [2024-05-15 01:31:30.239587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.552 [2024-05-15 01:31:30.239596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.552 [2024-05-15 01:31:30.239605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.552 [2024-05-15 01:31:30.239624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.552 qpair failed and we were unable to recover it. 00:28:54.813 [2024-05-15 01:31:30.249511] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.813 [2024-05-15 01:31:30.249777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.813 [2024-05-15 01:31:30.249796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.813 [2024-05-15 01:31:30.249806] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.813 [2024-05-15 01:31:30.249814] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.813 [2024-05-15 01:31:30.249832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.813 qpair failed and we were unable to recover it. 
00:28:54.813 [2024-05-15 01:31:30.259552] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.813 [2024-05-15 01:31:30.259662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.813 [2024-05-15 01:31:30.259680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.813 [2024-05-15 01:31:30.259690] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.813 [2024-05-15 01:31:30.259698] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.813 [2024-05-15 01:31:30.259716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.813 qpair failed and we were unable to recover it. 00:28:54.813 [2024-05-15 01:31:30.269480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.813 [2024-05-15 01:31:30.269594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.813 [2024-05-15 01:31:30.269612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.813 [2024-05-15 01:31:30.269622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.813 [2024-05-15 01:31:30.269630] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.813 [2024-05-15 01:31:30.269648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.813 qpair failed and we were unable to recover it. 00:28:54.813 [2024-05-15 01:31:30.279563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.813 [2024-05-15 01:31:30.279673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.813 [2024-05-15 01:31:30.279692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.813 [2024-05-15 01:31:30.279701] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.813 [2024-05-15 01:31:30.279715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.813 [2024-05-15 01:31:30.279733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.813 qpair failed and we were unable to recover it. 
00:28:54.813 [2024-05-15 01:31:30.289601] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.813 [2024-05-15 01:31:30.289704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.813 [2024-05-15 01:31:30.289722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.813 [2024-05-15 01:31:30.289732] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.813 [2024-05-15 01:31:30.289740] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.813 [2024-05-15 01:31:30.289758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.813 qpair failed and we were unable to recover it. 00:28:54.813 [2024-05-15 01:31:30.299650] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.813 [2024-05-15 01:31:30.299760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.813 [2024-05-15 01:31:30.299779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.813 [2024-05-15 01:31:30.299788] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.813 [2024-05-15 01:31:30.299797] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.813 [2024-05-15 01:31:30.299815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.813 qpair failed and we were unable to recover it. 00:28:54.813 [2024-05-15 01:31:30.309694] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.813 [2024-05-15 01:31:30.309856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.813 [2024-05-15 01:31:30.309875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.813 [2024-05-15 01:31:30.309885] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.813 [2024-05-15 01:31:30.309893] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.813 [2024-05-15 01:31:30.309911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.813 qpair failed and we were unable to recover it. 
00:28:54.813 [2024-05-15 01:31:30.319741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.813 [2024-05-15 01:31:30.319850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.813 [2024-05-15 01:31:30.319868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.813 [2024-05-15 01:31:30.319878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.813 [2024-05-15 01:31:30.319886] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.813 [2024-05-15 01:31:30.319904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.813 qpair failed and we were unable to recover it. 00:28:54.813 [2024-05-15 01:31:30.329737] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.813 [2024-05-15 01:31:30.329856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.813 [2024-05-15 01:31:30.329874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.813 [2024-05-15 01:31:30.329883] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.813 [2024-05-15 01:31:30.329892] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.813 [2024-05-15 01:31:30.329910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.813 qpair failed and we were unable to recover it. 00:28:54.813 [2024-05-15 01:31:30.339691] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.813 [2024-05-15 01:31:30.339955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.813 [2024-05-15 01:31:30.339975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.813 [2024-05-15 01:31:30.339984] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.813 [2024-05-15 01:31:30.339993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.813 [2024-05-15 01:31:30.340010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.813 qpair failed and we were unable to recover it. 
00:28:54.813 [2024-05-15 01:31:30.349805] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.813 [2024-05-15 01:31:30.349914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.813 [2024-05-15 01:31:30.349932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.813 [2024-05-15 01:31:30.349942] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.813 [2024-05-15 01:31:30.349950] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.813 [2024-05-15 01:31:30.349968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.813 qpair failed and we were unable to recover it. 00:28:54.813 [2024-05-15 01:31:30.359825] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.813 [2024-05-15 01:31:30.359935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.813 [2024-05-15 01:31:30.359953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.813 [2024-05-15 01:31:30.359963] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.813 [2024-05-15 01:31:30.359971] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.814 [2024-05-15 01:31:30.359989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.814 qpair failed and we were unable to recover it. 00:28:54.814 [2024-05-15 01:31:30.369845] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.814 [2024-05-15 01:31:30.369957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.814 [2024-05-15 01:31:30.369976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.814 [2024-05-15 01:31:30.369988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.814 [2024-05-15 01:31:30.369997] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.814 [2024-05-15 01:31:30.370014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.814 qpair failed and we were unable to recover it. 
00:28:54.814 [2024-05-15 01:31:30.379882] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.814 [2024-05-15 01:31:30.380145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.814 [2024-05-15 01:31:30.380165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.814 [2024-05-15 01:31:30.380175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.814 [2024-05-15 01:31:30.380184] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.814 [2024-05-15 01:31:30.380207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.814 qpair failed and we were unable to recover it. 00:28:54.814 [2024-05-15 01:31:30.389921] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.814 [2024-05-15 01:31:30.390205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.814 [2024-05-15 01:31:30.390224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.814 [2024-05-15 01:31:30.390234] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.814 [2024-05-15 01:31:30.390242] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.814 [2024-05-15 01:31:30.390262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.814 qpair failed and we were unable to recover it. 00:28:54.814 [2024-05-15 01:31:30.399930] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.814 [2024-05-15 01:31:30.400041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.814 [2024-05-15 01:31:30.400060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.814 [2024-05-15 01:31:30.400069] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.814 [2024-05-15 01:31:30.400077] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.814 [2024-05-15 01:31:30.400096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.814 qpair failed and we were unable to recover it. 
00:28:54.814 [2024-05-15 01:31:30.409969] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.814 [2024-05-15 01:31:30.410089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.814 [2024-05-15 01:31:30.410107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.814 [2024-05-15 01:31:30.410117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.814 [2024-05-15 01:31:30.410126] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.814 [2024-05-15 01:31:30.410144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.814 qpair failed and we were unable to recover it. 00:28:54.814 [2024-05-15 01:31:30.419897] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.814 [2024-05-15 01:31:30.420168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.814 [2024-05-15 01:31:30.420187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.814 [2024-05-15 01:31:30.420201] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.814 [2024-05-15 01:31:30.420210] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.814 [2024-05-15 01:31:30.420229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.814 qpair failed and we were unable to recover it. 00:28:54.814 [2024-05-15 01:31:30.429998] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.814 [2024-05-15 01:31:30.430107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.814 [2024-05-15 01:31:30.430125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.814 [2024-05-15 01:31:30.430135] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.814 [2024-05-15 01:31:30.430143] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.814 [2024-05-15 01:31:30.430161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.814 qpair failed and we were unable to recover it. 
00:28:54.814 [2024-05-15 01:31:30.439959] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.814 [2024-05-15 01:31:30.440069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.814 [2024-05-15 01:31:30.440087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.814 [2024-05-15 01:31:30.440097] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.814 [2024-05-15 01:31:30.440105] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.814 [2024-05-15 01:31:30.440123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.814 qpair failed and we were unable to recover it. 00:28:54.814 [2024-05-15 01:31:30.450043] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.814 [2024-05-15 01:31:30.450159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.814 [2024-05-15 01:31:30.450178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.814 [2024-05-15 01:31:30.450187] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.814 [2024-05-15 01:31:30.450200] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.814 [2024-05-15 01:31:30.450219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.814 qpair failed and we were unable to recover it. 00:28:54.814 [2024-05-15 01:31:30.460106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.814 [2024-05-15 01:31:30.460223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.814 [2024-05-15 01:31:30.460242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.814 [2024-05-15 01:31:30.460255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.814 [2024-05-15 01:31:30.460263] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.814 [2024-05-15 01:31:30.460281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.814 qpair failed and we were unable to recover it. 
00:28:54.814 [2024-05-15 01:31:30.470121] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.814 [2024-05-15 01:31:30.470242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.814 [2024-05-15 01:31:30.470261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.814 [2024-05-15 01:31:30.470270] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.814 [2024-05-15 01:31:30.470279] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.814 [2024-05-15 01:31:30.470297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.814 qpair failed and we were unable to recover it. 00:28:54.814 [2024-05-15 01:31:30.480151] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.814 [2024-05-15 01:31:30.480264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.814 [2024-05-15 01:31:30.480283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.814 [2024-05-15 01:31:30.480292] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.815 [2024-05-15 01:31:30.480301] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.815 [2024-05-15 01:31:30.480318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.815 qpair failed and we were unable to recover it. 00:28:54.815 [2024-05-15 01:31:30.490117] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.815 [2024-05-15 01:31:30.490262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.815 [2024-05-15 01:31:30.490280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.815 [2024-05-15 01:31:30.490289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.815 [2024-05-15 01:31:30.490297] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.815 [2024-05-15 01:31:30.490315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.815 qpair failed and we were unable to recover it. 
00:28:54.815 [2024-05-15 01:31:30.500200] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:54.815 [2024-05-15 01:31:30.500311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:54.815 [2024-05-15 01:31:30.500330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:54.815 [2024-05-15 01:31:30.500339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:54.815 [2024-05-15 01:31:30.500347] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:54.815 [2024-05-15 01:31:30.500365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:54.815 qpair failed and we were unable to recover it. 00:28:55.082 [2024-05-15 01:31:30.510241] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.082 [2024-05-15 01:31:30.510355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.082 [2024-05-15 01:31:30.510374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.082 [2024-05-15 01:31:30.510383] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.082 [2024-05-15 01:31:30.510392] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.082 [2024-05-15 01:31:30.510410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.082 qpair failed and we were unable to recover it. 00:28:55.082 [2024-05-15 01:31:30.520264] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.082 [2024-05-15 01:31:30.520385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.083 [2024-05-15 01:31:30.520403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.083 [2024-05-15 01:31:30.520413] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.083 [2024-05-15 01:31:30.520421] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.083 [2024-05-15 01:31:30.520439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.083 qpair failed and we were unable to recover it. 
00:28:55.083 [2024-05-15 01:31:30.530289] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.083 [2024-05-15 01:31:30.530400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.083 [2024-05-15 01:31:30.530419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.083 [2024-05-15 01:31:30.530429] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.083 [2024-05-15 01:31:30.530437] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.083 [2024-05-15 01:31:30.530456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-05-15 01:31:30.540377] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.083 [2024-05-15 01:31:30.540487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.083 [2024-05-15 01:31:30.540506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.083 [2024-05-15 01:31:30.540515] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.083 [2024-05-15 01:31:30.540524] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.083 [2024-05-15 01:31:30.540543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-05-15 01:31:30.550350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.083 [2024-05-15 01:31:30.550459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.083 [2024-05-15 01:31:30.550476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.083 [2024-05-15 01:31:30.550489] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.083 [2024-05-15 01:31:30.550497] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.083 [2024-05-15 01:31:30.550515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.083 qpair failed and we were unable to recover it. 
00:28:55.083 [2024-05-15 01:31:30.560384] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.083 [2024-05-15 01:31:30.560496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.083 [2024-05-15 01:31:30.560514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.083 [2024-05-15 01:31:30.560523] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.083 [2024-05-15 01:31:30.560532] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.083 [2024-05-15 01:31:30.560549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-05-15 01:31:30.570601] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.083 [2024-05-15 01:31:30.570707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.083 [2024-05-15 01:31:30.570725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.083 [2024-05-15 01:31:30.570734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.083 [2024-05-15 01:31:30.570743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.083 [2024-05-15 01:31:30.570761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-05-15 01:31:30.580451] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.083 [2024-05-15 01:31:30.580571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.083 [2024-05-15 01:31:30.580589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.083 [2024-05-15 01:31:30.580599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.083 [2024-05-15 01:31:30.580607] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.083 [2024-05-15 01:31:30.580625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.083 qpair failed and we were unable to recover it. 
00:28:55.083 [2024-05-15 01:31:30.590461] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.083 [2024-05-15 01:31:30.590569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.083 [2024-05-15 01:31:30.590587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.083 [2024-05-15 01:31:30.590597] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.083 [2024-05-15 01:31:30.590605] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.083 [2024-05-15 01:31:30.590623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-05-15 01:31:30.600400] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.083 [2024-05-15 01:31:30.600510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.083 [2024-05-15 01:31:30.600530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.083 [2024-05-15 01:31:30.600540] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.083 [2024-05-15 01:31:30.600549] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.083 [2024-05-15 01:31:30.600567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-05-15 01:31:30.610559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.083 [2024-05-15 01:31:30.610679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.083 [2024-05-15 01:31:30.610697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.083 [2024-05-15 01:31:30.610707] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.083 [2024-05-15 01:31:30.610715] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.083 [2024-05-15 01:31:30.610733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.083 qpair failed and we were unable to recover it. 
00:28:55.083 [2024-05-15 01:31:30.620534] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.083 [2024-05-15 01:31:30.620642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.083 [2024-05-15 01:31:30.620660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.083 [2024-05-15 01:31:30.620669] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.083 [2024-05-15 01:31:30.620678] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.083 [2024-05-15 01:31:30.620696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-05-15 01:31:30.630569] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.083 [2024-05-15 01:31:30.630682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.083 [2024-05-15 01:31:30.630700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.083 [2024-05-15 01:31:30.630709] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.083 [2024-05-15 01:31:30.630718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.083 [2024-05-15 01:31:30.630736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-05-15 01:31:30.640609] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.083 [2024-05-15 01:31:30.640720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.083 [2024-05-15 01:31:30.640741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.083 [2024-05-15 01:31:30.640751] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.083 [2024-05-15 01:31:30.640759] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.083 [2024-05-15 01:31:30.640777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.083 qpair failed and we were unable to recover it. 
00:28:55.083 [2024-05-15 01:31:30.650633] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.083 [2024-05-15 01:31:30.650740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.083 [2024-05-15 01:31:30.650759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.083 [2024-05-15 01:31:30.650768] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.083 [2024-05-15 01:31:30.650777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.083 [2024-05-15 01:31:30.650794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.083 qpair failed and we were unable to recover it. 00:28:55.083 [2024-05-15 01:31:30.660676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.083 [2024-05-15 01:31:30.660823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.084 [2024-05-15 01:31:30.660841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.084 [2024-05-15 01:31:30.660850] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.084 [2024-05-15 01:31:30.660859] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.084 [2024-05-15 01:31:30.660876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.084 qpair failed and we were unable to recover it. 00:28:55.084 [2024-05-15 01:31:30.670686] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.084 [2024-05-15 01:31:30.670799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.084 [2024-05-15 01:31:30.670817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.084 [2024-05-15 01:31:30.670827] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.084 [2024-05-15 01:31:30.670835] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.084 [2024-05-15 01:31:30.670853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.084 qpair failed and we were unable to recover it. 
00:28:55.084 [2024-05-15 01:31:30.680721] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.084 [2024-05-15 01:31:30.680833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.084 [2024-05-15 01:31:30.680851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.084 [2024-05-15 01:31:30.680861] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.084 [2024-05-15 01:31:30.680869] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.084 [2024-05-15 01:31:30.680890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.084 qpair failed and we were unable to recover it. 00:28:55.084 [2024-05-15 01:31:30.690755] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.084 [2024-05-15 01:31:30.690861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.084 [2024-05-15 01:31:30.690879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.084 [2024-05-15 01:31:30.690888] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.084 [2024-05-15 01:31:30.690897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.084 [2024-05-15 01:31:30.690915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.084 qpair failed and we were unable to recover it. 00:28:55.084 [2024-05-15 01:31:30.700790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.084 [2024-05-15 01:31:30.700903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.084 [2024-05-15 01:31:30.700922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.084 [2024-05-15 01:31:30.700932] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.084 [2024-05-15 01:31:30.700940] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.084 [2024-05-15 01:31:30.700958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.084 qpair failed and we were unable to recover it. 
00:28:55.084 [2024-05-15 01:31:30.710785] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.084 [2024-05-15 01:31:30.710894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.084 [2024-05-15 01:31:30.710912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.084 [2024-05-15 01:31:30.710922] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.084 [2024-05-15 01:31:30.710931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.084 [2024-05-15 01:31:30.710948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.084 qpair failed and we were unable to recover it. 00:28:55.084 [2024-05-15 01:31:30.720841] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.084 [2024-05-15 01:31:30.720949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.084 [2024-05-15 01:31:30.720967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.084 [2024-05-15 01:31:30.720976] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.084 [2024-05-15 01:31:30.720985] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.084 [2024-05-15 01:31:30.721002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.084 qpair failed and we were unable to recover it. 00:28:55.084 [2024-05-15 01:31:30.730867] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.084 [2024-05-15 01:31:30.730975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.084 [2024-05-15 01:31:30.730996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.084 [2024-05-15 01:31:30.731006] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.084 [2024-05-15 01:31:30.731014] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.084 [2024-05-15 01:31:30.731032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.084 qpair failed and we were unable to recover it. 
00:28:55.084 [2024-05-15 01:31:30.740927] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.084 [2024-05-15 01:31:30.741037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.084 [2024-05-15 01:31:30.741054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.084 [2024-05-15 01:31:30.741064] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.084 [2024-05-15 01:31:30.741072] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.084 [2024-05-15 01:31:30.741090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.084 qpair failed and we were unable to recover it. 00:28:55.084 [2024-05-15 01:31:30.750924] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.084 [2024-05-15 01:31:30.751037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.084 [2024-05-15 01:31:30.751055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.084 [2024-05-15 01:31:30.751065] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.084 [2024-05-15 01:31:30.751073] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.084 [2024-05-15 01:31:30.751092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.084 qpair failed and we were unable to recover it. 00:28:55.084 [2024-05-15 01:31:30.761095] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.084 [2024-05-15 01:31:30.761221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.084 [2024-05-15 01:31:30.761239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.084 [2024-05-15 01:31:30.761248] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.084 [2024-05-15 01:31:30.761256] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.084 [2024-05-15 01:31:30.761275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.084 qpair failed and we were unable to recover it. 
00:28:55.368 [2024-05-15 01:31:30.771051] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.368 [2024-05-15 01:31:30.771173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.368 [2024-05-15 01:31:30.771195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.368 [2024-05-15 01:31:30.771205] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.368 [2024-05-15 01:31:30.771214] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.368 [2024-05-15 01:31:30.771235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.368 qpair failed and we were unable to recover it. 00:28:55.368 [2024-05-15 01:31:30.781081] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.368 [2024-05-15 01:31:30.781198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.368 [2024-05-15 01:31:30.781216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.368 [2024-05-15 01:31:30.781226] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.368 [2024-05-15 01:31:30.781235] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.368 [2024-05-15 01:31:30.781252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.368 qpair failed and we were unable to recover it. 00:28:55.368 [2024-05-15 01:31:30.791127] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.368 [2024-05-15 01:31:30.791254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.368 [2024-05-15 01:31:30.791273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.368 [2024-05-15 01:31:30.791282] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.368 [2024-05-15 01:31:30.791291] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.368 [2024-05-15 01:31:30.791309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.368 qpair failed and we were unable to recover it. 
00:28:55.368 [2024-05-15 01:31:30.801067] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.368 [2024-05-15 01:31:30.801177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.368 [2024-05-15 01:31:30.801200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.368 [2024-05-15 01:31:30.801210] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.368 [2024-05-15 01:31:30.801218] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.368 [2024-05-15 01:31:30.801236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.368 qpair failed and we were unable to recover it. 00:28:55.368 [2024-05-15 01:31:30.811097] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.368 [2024-05-15 01:31:30.811209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.368 [2024-05-15 01:31:30.811227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.368 [2024-05-15 01:31:30.811237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.368 [2024-05-15 01:31:30.811246] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.368 [2024-05-15 01:31:30.811264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.368 qpair failed and we were unable to recover it. 00:28:55.368 [2024-05-15 01:31:30.821133] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.368 [2024-05-15 01:31:30.821250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.368 [2024-05-15 01:31:30.821272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.368 [2024-05-15 01:31:30.821281] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.368 [2024-05-15 01:31:30.821290] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.368 [2024-05-15 01:31:30.821307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.368 qpair failed and we were unable to recover it. 
00:28:55.368 [2024-05-15 01:31:30.831170] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.368 [2024-05-15 01:31:30.831300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.369 [2024-05-15 01:31:30.831319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.369 [2024-05-15 01:31:30.831328] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.369 [2024-05-15 01:31:30.831337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.369 [2024-05-15 01:31:30.831354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.369 qpair failed and we were unable to recover it. 00:28:55.369 [2024-05-15 01:31:30.841197] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.369 [2024-05-15 01:31:30.841307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.369 [2024-05-15 01:31:30.841325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.369 [2024-05-15 01:31:30.841335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.369 [2024-05-15 01:31:30.841343] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.369 [2024-05-15 01:31:30.841361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.369 qpair failed and we were unable to recover it. 00:28:55.369 [2024-05-15 01:31:30.851224] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.369 [2024-05-15 01:31:30.851364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.369 [2024-05-15 01:31:30.851383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.369 [2024-05-15 01:31:30.851392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.369 [2024-05-15 01:31:30.851401] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.369 [2024-05-15 01:31:30.851419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.369 qpair failed and we were unable to recover it. 
00:28:55.369 [2024-05-15 01:31:30.861252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.369 [2024-05-15 01:31:30.861367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.369 [2024-05-15 01:31:30.861386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.369 [2024-05-15 01:31:30.861395] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.369 [2024-05-15 01:31:30.861404] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.369 [2024-05-15 01:31:30.861425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.369 qpair failed and we were unable to recover it. 00:28:55.369 [2024-05-15 01:31:30.871249] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.369 [2024-05-15 01:31:30.871363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.369 [2024-05-15 01:31:30.871381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.369 [2024-05-15 01:31:30.871390] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.369 [2024-05-15 01:31:30.871399] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.369 [2024-05-15 01:31:30.871416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.369 qpair failed and we were unable to recover it. 00:28:55.369 [2024-05-15 01:31:30.881296] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.369 [2024-05-15 01:31:30.881410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.369 [2024-05-15 01:31:30.881428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.369 [2024-05-15 01:31:30.881437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.369 [2024-05-15 01:31:30.881446] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.369 [2024-05-15 01:31:30.881464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.369 qpair failed and we were unable to recover it. 
00:28:55.369 [2024-05-15 01:31:30.891324] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.369 [2024-05-15 01:31:30.891435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.369 [2024-05-15 01:31:30.891454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.369 [2024-05-15 01:31:30.891463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.369 [2024-05-15 01:31:30.891472] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.369 [2024-05-15 01:31:30.891490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.369 qpair failed and we were unable to recover it. 00:28:55.369 [2024-05-15 01:31:30.901389] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.369 [2024-05-15 01:31:30.901505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.369 [2024-05-15 01:31:30.901523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.369 [2024-05-15 01:31:30.901533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.369 [2024-05-15 01:31:30.901541] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.369 [2024-05-15 01:31:30.901559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.369 qpair failed and we were unable to recover it. 00:28:55.369 [2024-05-15 01:31:30.911365] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.369 [2024-05-15 01:31:30.911639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.369 [2024-05-15 01:31:30.911662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.369 [2024-05-15 01:31:30.911672] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.369 [2024-05-15 01:31:30.911680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.369 [2024-05-15 01:31:30.911699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.369 qpair failed and we were unable to recover it. 
00:28:55.369 [2024-05-15 01:31:30.921423] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.369 [2024-05-15 01:31:30.921534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.369 [2024-05-15 01:31:30.921553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.369 [2024-05-15 01:31:30.921562] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.369 [2024-05-15 01:31:30.921570] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.369 [2024-05-15 01:31:30.921589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.369 qpair failed and we were unable to recover it. 00:28:55.369 [2024-05-15 01:31:30.931414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.369 [2024-05-15 01:31:30.931521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.369 [2024-05-15 01:31:30.931540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.369 [2024-05-15 01:31:30.931549] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.369 [2024-05-15 01:31:30.931558] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.369 [2024-05-15 01:31:30.931576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.369 qpair failed and we were unable to recover it. 00:28:55.369 [2024-05-15 01:31:30.941494] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.369 [2024-05-15 01:31:30.941606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.369 [2024-05-15 01:31:30.941624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.369 [2024-05-15 01:31:30.941634] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.369 [2024-05-15 01:31:30.941642] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.369 [2024-05-15 01:31:30.941661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.369 qpair failed and we were unable to recover it. 
00:28:55.369 [2024-05-15 01:31:30.951563] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.369 [2024-05-15 01:31:30.951771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.369 [2024-05-15 01:31:30.951790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.369 [2024-05-15 01:31:30.951799] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.369 [2024-05-15 01:31:30.951813] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.369 [2024-05-15 01:31:30.951831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.369 qpair failed and we were unable to recover it. 00:28:55.369 [2024-05-15 01:31:30.961535] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.369 [2024-05-15 01:31:30.961647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.369 [2024-05-15 01:31:30.961665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.369 [2024-05-15 01:31:30.961675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.369 [2024-05-15 01:31:30.961683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.369 [2024-05-15 01:31:30.961702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.369 qpair failed and we were unable to recover it. 00:28:55.369 [2024-05-15 01:31:30.971589] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.370 [2024-05-15 01:31:30.971701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.370 [2024-05-15 01:31:30.971719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.370 [2024-05-15 01:31:30.971728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.370 [2024-05-15 01:31:30.971737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.370 [2024-05-15 01:31:30.971755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.370 qpair failed and we were unable to recover it. 
00:28:55.370 [2024-05-15 01:31:30.981603] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.370 [2024-05-15 01:31:30.981713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.370 [2024-05-15 01:31:30.981731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.370 [2024-05-15 01:31:30.981740] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.370 [2024-05-15 01:31:30.981749] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.370 [2024-05-15 01:31:30.981767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.370 qpair failed and we were unable to recover it. 00:28:55.370 [2024-05-15 01:31:30.991625] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.370 [2024-05-15 01:31:30.991730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.370 [2024-05-15 01:31:30.991749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.370 [2024-05-15 01:31:30.991758] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.370 [2024-05-15 01:31:30.991767] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.370 [2024-05-15 01:31:30.991785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.370 qpair failed and we were unable to recover it. 00:28:55.370 [2024-05-15 01:31:31.001648] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.370 [2024-05-15 01:31:31.001764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.370 [2024-05-15 01:31:31.001782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.370 [2024-05-15 01:31:31.001792] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.370 [2024-05-15 01:31:31.001801] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.370 [2024-05-15 01:31:31.001819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.370 qpair failed and we were unable to recover it. 
00:28:55.370 [2024-05-15 01:31:31.011678] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.370 [2024-05-15 01:31:31.011794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.370 [2024-05-15 01:31:31.011812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.370 [2024-05-15 01:31:31.011822] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.370 [2024-05-15 01:31:31.011830] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.370 [2024-05-15 01:31:31.011848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.370 qpair failed and we were unable to recover it. 00:28:55.370 [2024-05-15 01:31:31.021705] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.370 [2024-05-15 01:31:31.021814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.370 [2024-05-15 01:31:31.021832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.370 [2024-05-15 01:31:31.021841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.370 [2024-05-15 01:31:31.021850] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.370 [2024-05-15 01:31:31.021868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.370 qpair failed and we were unable to recover it. 00:28:55.370 [2024-05-15 01:31:31.031654] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.370 [2024-05-15 01:31:31.031764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.370 [2024-05-15 01:31:31.031782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.370 [2024-05-15 01:31:31.031791] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.370 [2024-05-15 01:31:31.031800] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.370 [2024-05-15 01:31:31.031818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.370 qpair failed and we were unable to recover it. 
00:28:55.370 [2024-05-15 01:31:31.041758] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.370 [2024-05-15 01:31:31.041922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.370 [2024-05-15 01:31:31.041940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.370 [2024-05-15 01:31:31.041949] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.370 [2024-05-15 01:31:31.041961] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.370 [2024-05-15 01:31:31.041979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.370 qpair failed and we were unable to recover it. 00:28:55.370 [2024-05-15 01:31:31.051701] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.370 [2024-05-15 01:31:31.051974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.370 [2024-05-15 01:31:31.051993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.370 [2024-05-15 01:31:31.052002] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.370 [2024-05-15 01:31:31.052011] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.370 [2024-05-15 01:31:31.052029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.370 qpair failed and we were unable to recover it. 00:28:55.631 [2024-05-15 01:31:31.061797] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.631 [2024-05-15 01:31:31.062011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.631 [2024-05-15 01:31:31.062030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.631 [2024-05-15 01:31:31.062040] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.631 [2024-05-15 01:31:31.062049] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.631 [2024-05-15 01:31:31.062067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.631 qpair failed and we were unable to recover it. 
00:28:55.631 [2024-05-15 01:31:31.071758] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.631 [2024-05-15 01:31:31.071872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.631 [2024-05-15 01:31:31.071890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.631 [2024-05-15 01:31:31.071899] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.631 [2024-05-15 01:31:31.071908] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.631 [2024-05-15 01:31:31.071925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.631 qpair failed and we were unable to recover it. 00:28:55.631 [2024-05-15 01:31:31.081863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.631 [2024-05-15 01:31:31.081974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.631 [2024-05-15 01:31:31.081992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.631 [2024-05-15 01:31:31.082001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.631 [2024-05-15 01:31:31.082010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.631 [2024-05-15 01:31:31.082028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.631 qpair failed and we were unable to recover it. 00:28:55.631 [2024-05-15 01:31:31.091884] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.631 [2024-05-15 01:31:31.091994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.631 [2024-05-15 01:31:31.092012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.631 [2024-05-15 01:31:31.092022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.631 [2024-05-15 01:31:31.092030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.631 [2024-05-15 01:31:31.092048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.631 qpair failed and we were unable to recover it. 
00:28:55.631 [2024-05-15 01:31:31.101944] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.631 [2024-05-15 01:31:31.102088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.631 [2024-05-15 01:31:31.102107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.631 [2024-05-15 01:31:31.102117] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.631 [2024-05-15 01:31:31.102125] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.631 [2024-05-15 01:31:31.102143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.631 qpair failed and we were unable to recover it. 00:28:55.631 [2024-05-15 01:31:31.111962] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.631 [2024-05-15 01:31:31.112095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.631 [2024-05-15 01:31:31.112114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.631 [2024-05-15 01:31:31.112123] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.631 [2024-05-15 01:31:31.112132] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.631 [2024-05-15 01:31:31.112150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.631 qpair failed and we were unable to recover it. 00:28:55.631 [2024-05-15 01:31:31.121978] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.631 [2024-05-15 01:31:31.122084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.631 [2024-05-15 01:31:31.122102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.631 [2024-05-15 01:31:31.122112] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.631 [2024-05-15 01:31:31.122120] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.631 [2024-05-15 01:31:31.122138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.631 qpair failed and we were unable to recover it. 
00:28:55.631 [2024-05-15 01:31:31.132011] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.631 [2024-05-15 01:31:31.132118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.631 [2024-05-15 01:31:31.132137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.631 [2024-05-15 01:31:31.132150] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.631 [2024-05-15 01:31:31.132158] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.631 [2024-05-15 01:31:31.132176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.631 qpair failed and we were unable to recover it. 00:28:55.631 [2024-05-15 01:31:31.141962] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.631 [2024-05-15 01:31:31.142075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.631 [2024-05-15 01:31:31.142093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.631 [2024-05-15 01:31:31.142103] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.631 [2024-05-15 01:31:31.142112] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.631 [2024-05-15 01:31:31.142129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.631 qpair failed and we were unable to recover it. 00:28:55.631 [2024-05-15 01:31:31.151991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.631 [2024-05-15 01:31:31.152147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.631 [2024-05-15 01:31:31.152165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.632 [2024-05-15 01:31:31.152175] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.632 [2024-05-15 01:31:31.152183] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.632 [2024-05-15 01:31:31.152205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.632 qpair failed and we were unable to recover it. 
00:28:55.632 [2024-05-15 01:31:31.162144] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.632 [2024-05-15 01:31:31.162256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.632 [2024-05-15 01:31:31.162274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.632 [2024-05-15 01:31:31.162284] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.632 [2024-05-15 01:31:31.162292] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.632 [2024-05-15 01:31:31.162309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.632 qpair failed and we were unable to recover it. 00:28:55.632 [2024-05-15 01:31:31.172106] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.632 [2024-05-15 01:31:31.172261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.632 [2024-05-15 01:31:31.172279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.632 [2024-05-15 01:31:31.172289] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.632 [2024-05-15 01:31:31.172297] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.632 [2024-05-15 01:31:31.172316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.632 qpair failed and we were unable to recover it. 00:28:55.632 [2024-05-15 01:31:31.182161] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.632 [2024-05-15 01:31:31.182279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.632 [2024-05-15 01:31:31.182298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.632 [2024-05-15 01:31:31.182307] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.632 [2024-05-15 01:31:31.182315] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.632 [2024-05-15 01:31:31.182333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.632 qpair failed and we were unable to recover it. 
00:28:55.632 [2024-05-15 01:31:31.192194] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.632 [2024-05-15 01:31:31.192306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.632 [2024-05-15 01:31:31.192324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.632 [2024-05-15 01:31:31.192333] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.632 [2024-05-15 01:31:31.192342] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.632 [2024-05-15 01:31:31.192360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.632 qpair failed and we were unable to recover it. 00:28:55.632 [2024-05-15 01:31:31.202220] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.632 [2024-05-15 01:31:31.202333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.632 [2024-05-15 01:31:31.202352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.632 [2024-05-15 01:31:31.202362] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.632 [2024-05-15 01:31:31.202370] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.632 [2024-05-15 01:31:31.202389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.632 qpair failed and we were unable to recover it. 00:28:55.632 [2024-05-15 01:31:31.212248] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.632 [2024-05-15 01:31:31.212522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.632 [2024-05-15 01:31:31.212542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.632 [2024-05-15 01:31:31.212551] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.632 [2024-05-15 01:31:31.212560] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.632 [2024-05-15 01:31:31.212579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.632 qpair failed and we were unable to recover it. 
00:28:55.632 [2024-05-15 01:31:31.222292] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.632 [2024-05-15 01:31:31.222404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.632 [2024-05-15 01:31:31.222422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.632 [2024-05-15 01:31:31.222435] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.632 [2024-05-15 01:31:31.222444] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.632 [2024-05-15 01:31:31.222462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.632 qpair failed and we were unable to recover it. 00:28:55.632 [2024-05-15 01:31:31.232275] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.632 [2024-05-15 01:31:31.232388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.632 [2024-05-15 01:31:31.232407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.632 [2024-05-15 01:31:31.232416] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.632 [2024-05-15 01:31:31.232424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.632 [2024-05-15 01:31:31.232442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.632 qpair failed and we were unable to recover it. 00:28:55.632 [2024-05-15 01:31:31.242322] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.632 [2024-05-15 01:31:31.242431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.632 [2024-05-15 01:31:31.242449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.632 [2024-05-15 01:31:31.242459] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.632 [2024-05-15 01:31:31.242467] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.632 [2024-05-15 01:31:31.242485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.632 qpair failed and we were unable to recover it. 
00:28:55.632 [2024-05-15 01:31:31.252335] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.632 [2024-05-15 01:31:31.252445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.632 [2024-05-15 01:31:31.252463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.632 [2024-05-15 01:31:31.252472] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.632 [2024-05-15 01:31:31.252480] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.632 [2024-05-15 01:31:31.252498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.632 qpair failed and we were unable to recover it. 00:28:55.632 [2024-05-15 01:31:31.262401] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.632 [2024-05-15 01:31:31.262510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.632 [2024-05-15 01:31:31.262528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.632 [2024-05-15 01:31:31.262538] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.632 [2024-05-15 01:31:31.262546] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.632 [2024-05-15 01:31:31.262564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.632 qpair failed and we were unable to recover it. 00:28:55.632 [2024-05-15 01:31:31.272479] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.632 [2024-05-15 01:31:31.272594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.632 [2024-05-15 01:31:31.272612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.632 [2024-05-15 01:31:31.272622] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.632 [2024-05-15 01:31:31.272631] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.632 [2024-05-15 01:31:31.272649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.632 qpair failed and we were unable to recover it. 
00:28:55.632 [2024-05-15 01:31:31.282416] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.632 [2024-05-15 01:31:31.282683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.632 [2024-05-15 01:31:31.282702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.632 [2024-05-15 01:31:31.282712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.632 [2024-05-15 01:31:31.282720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.632 [2024-05-15 01:31:31.282739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.632 qpair failed and we were unable to recover it. 00:28:55.632 [2024-05-15 01:31:31.292474] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.632 [2024-05-15 01:31:31.292582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.633 [2024-05-15 01:31:31.292600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.633 [2024-05-15 01:31:31.292610] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.633 [2024-05-15 01:31:31.292619] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.633 [2024-05-15 01:31:31.292636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.633 qpair failed and we were unable to recover it. 00:28:55.633 [2024-05-15 01:31:31.302501] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.633 [2024-05-15 01:31:31.302611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.633 [2024-05-15 01:31:31.302629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.633 [2024-05-15 01:31:31.302639] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.633 [2024-05-15 01:31:31.302647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.633 [2024-05-15 01:31:31.302665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.633 qpair failed and we were unable to recover it. 
00:28:55.633 [2024-05-15 01:31:31.312548] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.633 [2024-05-15 01:31:31.312680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.633 [2024-05-15 01:31:31.312699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.633 [2024-05-15 01:31:31.312711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.633 [2024-05-15 01:31:31.312720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.633 [2024-05-15 01:31:31.312739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.633 qpair failed and we were unable to recover it. 00:28:55.893 [2024-05-15 01:31:31.322549] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.893 [2024-05-15 01:31:31.322705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.893 [2024-05-15 01:31:31.322723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.893 [2024-05-15 01:31:31.322733] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.893 [2024-05-15 01:31:31.322741] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.894 [2024-05-15 01:31:31.322759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.894 qpair failed and we were unable to recover it. 00:28:55.894 [2024-05-15 01:31:31.332582] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.894 [2024-05-15 01:31:31.332726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.894 [2024-05-15 01:31:31.332745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.894 [2024-05-15 01:31:31.332754] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.894 [2024-05-15 01:31:31.332763] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.894 [2024-05-15 01:31:31.332781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.894 qpair failed and we were unable to recover it. 
00:28:55.894 [2024-05-15 01:31:31.342590] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.894 [2024-05-15 01:31:31.342699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.894 [2024-05-15 01:31:31.342717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.894 [2024-05-15 01:31:31.342727] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.894 [2024-05-15 01:31:31.342736] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.894 [2024-05-15 01:31:31.342753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.894 qpair failed and we were unable to recover it. 00:28:55.894 [2024-05-15 01:31:31.352631] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.894 [2024-05-15 01:31:31.352742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.894 [2024-05-15 01:31:31.352760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.894 [2024-05-15 01:31:31.352770] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.894 [2024-05-15 01:31:31.352778] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.894 [2024-05-15 01:31:31.352797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.894 qpair failed and we were unable to recover it. 00:28:55.894 [2024-05-15 01:31:31.362598] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.894 [2024-05-15 01:31:31.362708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.894 [2024-05-15 01:31:31.362726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.894 [2024-05-15 01:31:31.362736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.894 [2024-05-15 01:31:31.362744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.894 [2024-05-15 01:31:31.362762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.894 qpair failed and we were unable to recover it. 
00:28:55.894 [2024-05-15 01:31:31.372676] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.894 [2024-05-15 01:31:31.372786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.894 [2024-05-15 01:31:31.372804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.894 [2024-05-15 01:31:31.372814] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.894 [2024-05-15 01:31:31.372822] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.894 [2024-05-15 01:31:31.372840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.894 qpair failed and we were unable to recover it. 00:28:55.894 [2024-05-15 01:31:31.382732] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.894 [2024-05-15 01:31:31.382843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.894 [2024-05-15 01:31:31.382861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.894 [2024-05-15 01:31:31.382871] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.894 [2024-05-15 01:31:31.382879] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.894 [2024-05-15 01:31:31.382897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.894 qpair failed and we were unable to recover it. 00:28:55.894 [2024-05-15 01:31:31.392743] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.894 [2024-05-15 01:31:31.392858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.894 [2024-05-15 01:31:31.392877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.894 [2024-05-15 01:31:31.392887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.894 [2024-05-15 01:31:31.392895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.894 [2024-05-15 01:31:31.392914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.894 qpair failed and we were unable to recover it. 
00:28:55.894 [2024-05-15 01:31:31.402717] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.894 [2024-05-15 01:31:31.402821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.894 [2024-05-15 01:31:31.402843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.894 [2024-05-15 01:31:31.402852] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.894 [2024-05-15 01:31:31.402861] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.894 [2024-05-15 01:31:31.402879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.894 qpair failed and we were unable to recover it. 00:28:55.894 [2024-05-15 01:31:31.412810] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.894 [2024-05-15 01:31:31.412916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.894 [2024-05-15 01:31:31.412935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.894 [2024-05-15 01:31:31.412944] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.894 [2024-05-15 01:31:31.412953] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.894 [2024-05-15 01:31:31.412971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.894 qpair failed and we were unable to recover it. 00:28:55.894 [2024-05-15 01:31:31.422844] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.894 [2024-05-15 01:31:31.422953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.894 [2024-05-15 01:31:31.422972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.894 [2024-05-15 01:31:31.422981] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.894 [2024-05-15 01:31:31.422989] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.894 [2024-05-15 01:31:31.423006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.894 qpair failed and we were unable to recover it. 
00:28:55.894 [2024-05-15 01:31:31.432875] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.894 [2024-05-15 01:31:31.432984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.894 [2024-05-15 01:31:31.433002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.894 [2024-05-15 01:31:31.433012] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.894 [2024-05-15 01:31:31.433020] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.894 [2024-05-15 01:31:31.433038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.894 qpair failed and we were unable to recover it. 00:28:55.894 [2024-05-15 01:31:31.442848] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.894 [2024-05-15 01:31:31.442958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.894 [2024-05-15 01:31:31.442977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.894 [2024-05-15 01:31:31.442986] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.894 [2024-05-15 01:31:31.442995] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.894 [2024-05-15 01:31:31.443012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.894 qpair failed and we were unable to recover it. 00:28:55.894 [2024-05-15 01:31:31.452954] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.894 [2024-05-15 01:31:31.453064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.894 [2024-05-15 01:31:31.453083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.894 [2024-05-15 01:31:31.453092] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.894 [2024-05-15 01:31:31.453101] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.894 [2024-05-15 01:31:31.453119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.894 qpair failed and we were unable to recover it. 
00:28:55.894 [2024-05-15 01:31:31.462948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.894 [2024-05-15 01:31:31.463055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.895 [2024-05-15 01:31:31.463074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.895 [2024-05-15 01:31:31.463083] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.895 [2024-05-15 01:31:31.463092] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.895 [2024-05-15 01:31:31.463109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.895 qpair failed and we were unable to recover it. 00:28:55.895 [2024-05-15 01:31:31.472958] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.895 [2024-05-15 01:31:31.473066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.895 [2024-05-15 01:31:31.473084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.895 [2024-05-15 01:31:31.473094] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.895 [2024-05-15 01:31:31.473102] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.895 [2024-05-15 01:31:31.473120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.895 qpair failed and we were unable to recover it. 00:28:55.895 [2024-05-15 01:31:31.483015] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.895 [2024-05-15 01:31:31.483123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.895 [2024-05-15 01:31:31.483141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.895 [2024-05-15 01:31:31.483151] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.895 [2024-05-15 01:31:31.483159] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.895 [2024-05-15 01:31:31.483177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.895 qpair failed and we were unable to recover it. 
00:28:55.895 [2024-05-15 01:31:31.493003] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.895 [2024-05-15 01:31:31.493121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.895 [2024-05-15 01:31:31.493142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.895 [2024-05-15 01:31:31.493152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.895 [2024-05-15 01:31:31.493160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.895 [2024-05-15 01:31:31.493178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.895 qpair failed and we were unable to recover it. 00:28:55.895 [2024-05-15 01:31:31.503008] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.895 [2024-05-15 01:31:31.503118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.895 [2024-05-15 01:31:31.503136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.895 [2024-05-15 01:31:31.503146] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.895 [2024-05-15 01:31:31.503154] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.895 [2024-05-15 01:31:31.503172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.895 qpair failed and we were unable to recover it. 00:28:55.895 [2024-05-15 01:31:31.513073] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.895 [2024-05-15 01:31:31.513189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.895 [2024-05-15 01:31:31.513212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.895 [2024-05-15 01:31:31.513221] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.895 [2024-05-15 01:31:31.513230] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.895 [2024-05-15 01:31:31.513248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.895 qpair failed and we were unable to recover it. 
00:28:55.895 [2024-05-15 01:31:31.523116] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.895 [2024-05-15 01:31:31.523226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.895 [2024-05-15 01:31:31.523244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.895 [2024-05-15 01:31:31.523254] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.895 [2024-05-15 01:31:31.523262] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.895 [2024-05-15 01:31:31.523280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.895 qpair failed and we were unable to recover it. 00:28:55.895 [2024-05-15 01:31:31.533185] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.895 [2024-05-15 01:31:31.533304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.895 [2024-05-15 01:31:31.533323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.895 [2024-05-15 01:31:31.533332] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.895 [2024-05-15 01:31:31.533341] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.895 [2024-05-15 01:31:31.533362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.895 qpair failed and we were unable to recover it. 00:28:55.895 [2024-05-15 01:31:31.543180] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.895 [2024-05-15 01:31:31.543298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.895 [2024-05-15 01:31:31.543317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.895 [2024-05-15 01:31:31.543326] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.895 [2024-05-15 01:31:31.543336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.895 [2024-05-15 01:31:31.543354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.895 qpair failed and we were unable to recover it. 
00:28:55.895 [2024-05-15 01:31:31.553204] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.895 [2024-05-15 01:31:31.553317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.895 [2024-05-15 01:31:31.553335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.895 [2024-05-15 01:31:31.553344] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.895 [2024-05-15 01:31:31.553353] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.895 [2024-05-15 01:31:31.553371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.895 qpair failed and we were unable to recover it. 00:28:55.895 [2024-05-15 01:31:31.563202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.895 [2024-05-15 01:31:31.563312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.895 [2024-05-15 01:31:31.563330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.895 [2024-05-15 01:31:31.563339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.895 [2024-05-15 01:31:31.563348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.895 [2024-05-15 01:31:31.563365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.895 qpair failed and we were unable to recover it. 00:28:55.895 [2024-05-15 01:31:31.573265] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.895 [2024-05-15 01:31:31.573376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.895 [2024-05-15 01:31:31.573394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.895 [2024-05-15 01:31:31.573404] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.895 [2024-05-15 01:31:31.573412] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.895 [2024-05-15 01:31:31.573430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.895 qpair failed and we were unable to recover it. 
00:28:55.895 [2024-05-15 01:31:31.583314] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:55.895 [2024-05-15 01:31:31.583424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:55.895 [2024-05-15 01:31:31.583445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:55.895 [2024-05-15 01:31:31.583455] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:55.895 [2024-05-15 01:31:31.583463] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:55.895 [2024-05-15 01:31:31.583481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:55.895 qpair failed and we were unable to recover it. 00:28:56.156 [2024-05-15 01:31:31.593353] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.156 [2024-05-15 01:31:31.593468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.156 [2024-05-15 01:31:31.593486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.156 [2024-05-15 01:31:31.593496] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.156 [2024-05-15 01:31:31.593504] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.156 [2024-05-15 01:31:31.593522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.157 qpair failed and we were unable to recover it. 00:28:56.157 [2024-05-15 01:31:31.603360] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.157 [2024-05-15 01:31:31.603469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.157 [2024-05-15 01:31:31.603489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.157 [2024-05-15 01:31:31.603499] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.157 [2024-05-15 01:31:31.603508] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.157 [2024-05-15 01:31:31.603527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.157 qpair failed and we were unable to recover it. 
00:28:56.157 [2024-05-15 01:31:31.613559] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.157 [2024-05-15 01:31:31.613670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.157 [2024-05-15 01:31:31.613688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.157 [2024-05-15 01:31:31.613698] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.157 [2024-05-15 01:31:31.613706] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.157 [2024-05-15 01:31:31.613724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.157 qpair failed and we were unable to recover it. 00:28:56.157 [2024-05-15 01:31:31.623408] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.157 [2024-05-15 01:31:31.623521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.157 [2024-05-15 01:31:31.623540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.157 [2024-05-15 01:31:31.623549] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.157 [2024-05-15 01:31:31.623558] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.157 [2024-05-15 01:31:31.623579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.157 qpair failed and we were unable to recover it. 00:28:56.157 [2024-05-15 01:31:31.633521] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.157 [2024-05-15 01:31:31.633635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.157 [2024-05-15 01:31:31.633653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.157 [2024-05-15 01:31:31.633663] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.157 [2024-05-15 01:31:31.633671] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.157 [2024-05-15 01:31:31.633689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.157 qpair failed and we were unable to recover it. 
00:28:56.157 [2024-05-15 01:31:31.643434] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.157 [2024-05-15 01:31:31.643546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.157 [2024-05-15 01:31:31.643564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.157 [2024-05-15 01:31:31.643573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.157 [2024-05-15 01:31:31.643582] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.157 [2024-05-15 01:31:31.643599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.157 qpair failed and we were unable to recover it. 00:28:56.157 [2024-05-15 01:31:31.653478] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.157 [2024-05-15 01:31:31.653599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.157 [2024-05-15 01:31:31.653617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.157 [2024-05-15 01:31:31.653626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.157 [2024-05-15 01:31:31.653634] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.157 [2024-05-15 01:31:31.653652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.157 qpair failed and we were unable to recover it. 00:28:56.157 [2024-05-15 01:31:31.663527] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.157 [2024-05-15 01:31:31.663639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.157 [2024-05-15 01:31:31.663657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.157 [2024-05-15 01:31:31.663667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.157 [2024-05-15 01:31:31.663675] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.157 [2024-05-15 01:31:31.663692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.157 qpair failed and we were unable to recover it. 
00:28:56.157 [2024-05-15 01:31:31.673483] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.157 [2024-05-15 01:31:31.673641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.157 [2024-05-15 01:31:31.673662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.157 [2024-05-15 01:31:31.673671] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.157 [2024-05-15 01:31:31.673680] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.157 [2024-05-15 01:31:31.673698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.157 qpair failed and we were unable to recover it. 00:28:56.157 [2024-05-15 01:31:31.683577] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.157 [2024-05-15 01:31:31.683686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.157 [2024-05-15 01:31:31.683704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.157 [2024-05-15 01:31:31.683713] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.157 [2024-05-15 01:31:31.683721] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.157 [2024-05-15 01:31:31.683739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.157 qpair failed and we were unable to recover it. 00:28:56.157 [2024-05-15 01:31:31.693583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.157 [2024-05-15 01:31:31.693691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.157 [2024-05-15 01:31:31.693709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.157 [2024-05-15 01:31:31.693718] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.157 [2024-05-15 01:31:31.693727] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.157 [2024-05-15 01:31:31.693745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.157 qpair failed and we were unable to recover it. 
00:28:56.157 [2024-05-15 01:31:31.703637] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.157 [2024-05-15 01:31:31.703749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.157 [2024-05-15 01:31:31.703767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.157 [2024-05-15 01:31:31.703777] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.157 [2024-05-15 01:31:31.703785] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.157 [2024-05-15 01:31:31.703803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.157 qpair failed and we were unable to recover it. 00:28:56.157 [2024-05-15 01:31:31.713615] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.157 [2024-05-15 01:31:31.713770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.157 [2024-05-15 01:31:31.713788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.157 [2024-05-15 01:31:31.713798] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.157 [2024-05-15 01:31:31.713811] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.157 [2024-05-15 01:31:31.713830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.157 qpair failed and we were unable to recover it. 00:28:56.157 [2024-05-15 01:31:31.723627] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.157 [2024-05-15 01:31:31.723740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.157 [2024-05-15 01:31:31.723759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.157 [2024-05-15 01:31:31.723769] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.157 [2024-05-15 01:31:31.723777] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.157 [2024-05-15 01:31:31.723795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.157 qpair failed and we were unable to recover it. 
00:28:56.157 [2024-05-15 01:31:31.733908] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.157 [2024-05-15 01:31:31.734025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.157 [2024-05-15 01:31:31.734044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.157 [2024-05-15 01:31:31.734053] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.157 [2024-05-15 01:31:31.734062] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.158 [2024-05-15 01:31:31.734080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.158 qpair failed and we were unable to recover it. 00:28:56.158 [2024-05-15 01:31:31.743862] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.158 [2024-05-15 01:31:31.743973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.158 [2024-05-15 01:31:31.743992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.158 [2024-05-15 01:31:31.744001] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.158 [2024-05-15 01:31:31.744010] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.158 [2024-05-15 01:31:31.744028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.158 qpair failed and we were unable to recover it. 00:28:56.158 [2024-05-15 01:31:31.753755] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.158 [2024-05-15 01:31:31.753861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.158 [2024-05-15 01:31:31.753879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.158 [2024-05-15 01:31:31.753888] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.158 [2024-05-15 01:31:31.753897] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.158 [2024-05-15 01:31:31.753914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.158 qpair failed and we were unable to recover it. 
00:28:56.158 [2024-05-15 01:31:31.763820] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.158 [2024-05-15 01:31:31.763931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.158 [2024-05-15 01:31:31.763950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.158 [2024-05-15 01:31:31.763959] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.158 [2024-05-15 01:31:31.763967] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.158 [2024-05-15 01:31:31.763985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.158 qpair failed and we were unable to recover it. 00:28:56.158 [2024-05-15 01:31:31.773854] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.158 [2024-05-15 01:31:31.773964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.158 [2024-05-15 01:31:31.773982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.158 [2024-05-15 01:31:31.773991] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.158 [2024-05-15 01:31:31.774000] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.158 [2024-05-15 01:31:31.774017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.158 qpair failed and we were unable to recover it. 00:28:56.158 [2024-05-15 01:31:31.783882] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.158 [2024-05-15 01:31:31.783995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.158 [2024-05-15 01:31:31.784013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.158 [2024-05-15 01:31:31.784022] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.158 [2024-05-15 01:31:31.784030] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.158 [2024-05-15 01:31:31.784048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.158 qpair failed and we were unable to recover it. 
00:28:56.158 [2024-05-15 01:31:31.793932] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.158 [2024-05-15 01:31:31.794047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.158 [2024-05-15 01:31:31.794065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.158 [2024-05-15 01:31:31.794074] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.158 [2024-05-15 01:31:31.794083] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.158 [2024-05-15 01:31:31.794101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.158 qpair failed and we were unable to recover it. 00:28:56.158 [2024-05-15 01:31:31.803938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.158 [2024-05-15 01:31:31.804049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.158 [2024-05-15 01:31:31.804068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.158 [2024-05-15 01:31:31.804077] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.158 [2024-05-15 01:31:31.804089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.158 [2024-05-15 01:31:31.804107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.158 qpair failed and we were unable to recover it. 00:28:56.158 [2024-05-15 01:31:31.813974] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.158 [2024-05-15 01:31:31.814088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.158 [2024-05-15 01:31:31.814107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.158 [2024-05-15 01:31:31.814116] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.158 [2024-05-15 01:31:31.814125] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.158 [2024-05-15 01:31:31.814142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.158 qpair failed and we were unable to recover it. 
00:28:56.158 [2024-05-15 01:31:31.824015] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.158 [2024-05-15 01:31:31.824124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.158 [2024-05-15 01:31:31.824143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.158 [2024-05-15 01:31:31.824152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.158 [2024-05-15 01:31:31.824161] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.158 [2024-05-15 01:31:31.824178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.158 qpair failed and we were unable to recover it. 00:28:56.158 [2024-05-15 01:31:31.834047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.158 [2024-05-15 01:31:31.834164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.158 [2024-05-15 01:31:31.834183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.158 [2024-05-15 01:31:31.834199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.158 [2024-05-15 01:31:31.834208] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.158 [2024-05-15 01:31:31.834226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.158 qpair failed and we were unable to recover it. 00:28:56.158 [2024-05-15 01:31:31.843976] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.158 [2024-05-15 01:31:31.844244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.158 [2024-05-15 01:31:31.844264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.158 [2024-05-15 01:31:31.844273] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.158 [2024-05-15 01:31:31.844282] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.158 [2024-05-15 01:31:31.844300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.158 qpair failed and we were unable to recover it. 
00:28:56.421 [2024-05-15 01:31:31.854121] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.421 [2024-05-15 01:31:31.854280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.421 [2024-05-15 01:31:31.854298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.421 [2024-05-15 01:31:31.854308] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.421 [2024-05-15 01:31:31.854316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.421 [2024-05-15 01:31:31.854334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.421 qpair failed and we were unable to recover it. 00:28:56.421 [2024-05-15 01:31:31.864297] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.421 [2024-05-15 01:31:31.864410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.421 [2024-05-15 01:31:31.864428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.421 [2024-05-15 01:31:31.864437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.421 [2024-05-15 01:31:31.864446] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.421 [2024-05-15 01:31:31.864464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.421 qpair failed and we were unable to recover it. 00:28:56.421 [2024-05-15 01:31:31.874128] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.421 [2024-05-15 01:31:31.874246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.421 [2024-05-15 01:31:31.874265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.421 [2024-05-15 01:31:31.874274] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.421 [2024-05-15 01:31:31.874282] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.421 [2024-05-15 01:31:31.874301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.421 qpair failed and we were unable to recover it. 
00:28:56.421 [2024-05-15 01:31:31.884163] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.421 [2024-05-15 01:31:31.884278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.421 [2024-05-15 01:31:31.884297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.421 [2024-05-15 01:31:31.884307] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.421 [2024-05-15 01:31:31.884316] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.421 [2024-05-15 01:31:31.884335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.421 qpair failed and we were unable to recover it. 00:28:56.421 [2024-05-15 01:31:31.894154] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.421 [2024-05-15 01:31:31.894266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.421 [2024-05-15 01:31:31.894284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.421 [2024-05-15 01:31:31.894294] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.421 [2024-05-15 01:31:31.894305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.421 [2024-05-15 01:31:31.894324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.421 qpair failed and we were unable to recover it. 00:28:56.421 [2024-05-15 01:31:31.904233] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.421 [2024-05-15 01:31:31.904347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.421 [2024-05-15 01:31:31.904365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.422 [2024-05-15 01:31:31.904375] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.422 [2024-05-15 01:31:31.904383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.422 [2024-05-15 01:31:31.904401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.422 qpair failed and we were unable to recover it. 
00:28:56.422 [2024-05-15 01:31:31.914253] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.422 [2024-05-15 01:31:31.914369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.422 [2024-05-15 01:31:31.914388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.422 [2024-05-15 01:31:31.914397] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.422 [2024-05-15 01:31:31.914405] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.422 [2024-05-15 01:31:31.914424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.422 qpair failed and we were unable to recover it. 00:28:56.422 [2024-05-15 01:31:31.924280] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.422 [2024-05-15 01:31:31.924387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.422 [2024-05-15 01:31:31.924406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.422 [2024-05-15 01:31:31.924415] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.422 [2024-05-15 01:31:31.924424] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.422 [2024-05-15 01:31:31.924442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.422 qpair failed and we were unable to recover it. 00:28:56.422 [2024-05-15 01:31:31.934308] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.422 [2024-05-15 01:31:31.934417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.422 [2024-05-15 01:31:31.934435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.422 [2024-05-15 01:31:31.934445] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.422 [2024-05-15 01:31:31.934453] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.422 [2024-05-15 01:31:31.934471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.422 qpair failed and we were unable to recover it. 
00:28:56.422 [2024-05-15 01:31:31.944358] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.422 [2024-05-15 01:31:31.944482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.422 [2024-05-15 01:31:31.944500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.422 [2024-05-15 01:31:31.944510] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.422 [2024-05-15 01:31:31.944518] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.422 [2024-05-15 01:31:31.944536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.422 qpair failed and we were unable to recover it. 00:28:56.422 [2024-05-15 01:31:31.954399] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.422 [2024-05-15 01:31:31.954522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.422 [2024-05-15 01:31:31.954540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.422 [2024-05-15 01:31:31.954550] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.422 [2024-05-15 01:31:31.954558] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.422 [2024-05-15 01:31:31.954576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.422 qpair failed and we were unable to recover it. 00:28:56.422 [2024-05-15 01:31:31.964400] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.422 [2024-05-15 01:31:31.964671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.422 [2024-05-15 01:31:31.964690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.422 [2024-05-15 01:31:31.964699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.422 [2024-05-15 01:31:31.964708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.422 [2024-05-15 01:31:31.964727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.422 qpair failed and we were unable to recover it. 
00:28:56.422 [2024-05-15 01:31:31.974426] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.422 [2024-05-15 01:31:31.974531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.422 [2024-05-15 01:31:31.974549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.422 [2024-05-15 01:31:31.974559] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.422 [2024-05-15 01:31:31.974567] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.422 [2024-05-15 01:31:31.974585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.422 qpair failed and we were unable to recover it. 00:28:56.422 [2024-05-15 01:31:31.984383] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.422 [2024-05-15 01:31:31.984497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.422 [2024-05-15 01:31:31.984515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.422 [2024-05-15 01:31:31.984528] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.422 [2024-05-15 01:31:31.984537] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.422 [2024-05-15 01:31:31.984555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.422 qpair failed and we were unable to recover it. 00:28:56.422 [2024-05-15 01:31:31.994481] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.422 [2024-05-15 01:31:31.994766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.422 [2024-05-15 01:31:31.994785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.422 [2024-05-15 01:31:31.994794] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.422 [2024-05-15 01:31:31.994803] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.422 [2024-05-15 01:31:31.994822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.422 qpair failed and we were unable to recover it. 
00:28:56.422 [2024-05-15 01:31:32.004551] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.422 [2024-05-15 01:31:32.004706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.422 [2024-05-15 01:31:32.004724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.422 [2024-05-15 01:31:32.004734] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.422 [2024-05-15 01:31:32.004743] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.422 [2024-05-15 01:31:32.004761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.422 qpair failed and we were unable to recover it. 00:28:56.422 [2024-05-15 01:31:32.014462] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.422 [2024-05-15 01:31:32.014572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.422 [2024-05-15 01:31:32.014591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.422 [2024-05-15 01:31:32.014600] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.422 [2024-05-15 01:31:32.014609] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.422 [2024-05-15 01:31:32.014627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.422 qpair failed and we were unable to recover it. 00:28:56.422 [2024-05-15 01:31:32.024574] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.422 [2024-05-15 01:31:32.024683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.422 [2024-05-15 01:31:32.024701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.422 [2024-05-15 01:31:32.024711] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.422 [2024-05-15 01:31:32.024719] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.422 [2024-05-15 01:31:32.024737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.422 qpair failed and we were unable to recover it. 
00:28:56.422 [2024-05-15 01:31:32.034590] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.422 [2024-05-15 01:31:32.034700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.422 [2024-05-15 01:31:32.034719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.422 [2024-05-15 01:31:32.034728] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.422 [2024-05-15 01:31:32.034737] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.422 [2024-05-15 01:31:32.034755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.422 qpair failed and we were unable to recover it. 00:28:56.422 [2024-05-15 01:31:32.044626] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.422 [2024-05-15 01:31:32.044733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.422 [2024-05-15 01:31:32.044751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.422 [2024-05-15 01:31:32.044761] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.422 [2024-05-15 01:31:32.044770] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.422 [2024-05-15 01:31:32.044787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.422 qpair failed and we were unable to recover it. 00:28:56.422 [2024-05-15 01:31:32.054667] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.422 [2024-05-15 01:31:32.054798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.423 [2024-05-15 01:31:32.054817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.423 [2024-05-15 01:31:32.054826] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.423 [2024-05-15 01:31:32.054835] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.423 [2024-05-15 01:31:32.054853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.423 qpair failed and we were unable to recover it. 
00:28:56.423 [2024-05-15 01:31:32.064685] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.423 [2024-05-15 01:31:32.064794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.423 [2024-05-15 01:31:32.064812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.423 [2024-05-15 01:31:32.064822] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.423 [2024-05-15 01:31:32.064830] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.423 [2024-05-15 01:31:32.064848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.423 qpair failed and we were unable to recover it. 00:28:56.423 [2024-05-15 01:31:32.074705] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.423 [2024-05-15 01:31:32.074812] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.423 [2024-05-15 01:31:32.074830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.423 [2024-05-15 01:31:32.074843] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.423 [2024-05-15 01:31:32.074852] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.423 [2024-05-15 01:31:32.074870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.423 qpair failed and we were unable to recover it. 00:28:56.423 [2024-05-15 01:31:32.084741] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.423 [2024-05-15 01:31:32.084850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.423 [2024-05-15 01:31:32.084868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.423 [2024-05-15 01:31:32.084878] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.423 [2024-05-15 01:31:32.084887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.423 [2024-05-15 01:31:32.084904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.423 qpair failed and we were unable to recover it. 
00:28:56.423 [2024-05-15 01:31:32.094769] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.423 [2024-05-15 01:31:32.094885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.423 [2024-05-15 01:31:32.094903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.423 [2024-05-15 01:31:32.094913] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.423 [2024-05-15 01:31:32.094921] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.423 [2024-05-15 01:31:32.094939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.423 qpair failed and we were unable to recover it. 00:28:56.423 [2024-05-15 01:31:32.104801] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.423 [2024-05-15 01:31:32.104910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.423 [2024-05-15 01:31:32.104928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.423 [2024-05-15 01:31:32.104938] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.423 [2024-05-15 01:31:32.104946] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.423 [2024-05-15 01:31:32.104964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.423 qpair failed and we were unable to recover it. 00:28:56.683 [2024-05-15 01:31:32.114822] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.683 [2024-05-15 01:31:32.114934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.683 [2024-05-15 01:31:32.114953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.683 [2024-05-15 01:31:32.114962] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.683 [2024-05-15 01:31:32.114971] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.683 [2024-05-15 01:31:32.114989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.683 qpair failed and we were unable to recover it. 
00:28:56.683 [2024-05-15 01:31:32.124859] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.683 [2024-05-15 01:31:32.124966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.683 [2024-05-15 01:31:32.124984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.683 [2024-05-15 01:31:32.124994] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.683 [2024-05-15 01:31:32.125002] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.683 [2024-05-15 01:31:32.125020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.683 qpair failed and we were unable to recover it. 00:28:56.683 [2024-05-15 01:31:32.134913] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.683 [2024-05-15 01:31:32.135020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.683 [2024-05-15 01:31:32.135038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.683 [2024-05-15 01:31:32.135048] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.683 [2024-05-15 01:31:32.135057] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.683 [2024-05-15 01:31:32.135075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.683 qpair failed and we were unable to recover it. 00:28:56.683 [2024-05-15 01:31:32.144936] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.683 [2024-05-15 01:31:32.145043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.683 [2024-05-15 01:31:32.145061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.683 [2024-05-15 01:31:32.145071] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.683 [2024-05-15 01:31:32.145079] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.683 [2024-05-15 01:31:32.145097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.683 qpair failed and we were unable to recover it. 
00:28:56.683 [2024-05-15 01:31:32.154957] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.683 [2024-05-15 01:31:32.155071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.683 [2024-05-15 01:31:32.155089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.683 [2024-05-15 01:31:32.155098] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.683 [2024-05-15 01:31:32.155107] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.683 [2024-05-15 01:31:32.155124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.683 qpair failed and we were unable to recover it. 00:28:56.683 [2024-05-15 01:31:32.164938] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.684 [2024-05-15 01:31:32.165213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.684 [2024-05-15 01:31:32.165236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.684 [2024-05-15 01:31:32.165246] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.684 [2024-05-15 01:31:32.165255] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.684 [2024-05-15 01:31:32.165274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.684 qpair failed and we were unable to recover it. 00:28:56.684 [2024-05-15 01:31:32.174990] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.684 [2024-05-15 01:31:32.175103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.684 [2024-05-15 01:31:32.175121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.684 [2024-05-15 01:31:32.175131] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.684 [2024-05-15 01:31:32.175139] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.684 [2024-05-15 01:31:32.175157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.684 qpair failed and we were unable to recover it. 
00:28:56.684 [2024-05-15 01:31:32.185021] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.684 [2024-05-15 01:31:32.185132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.684 [2024-05-15 01:31:32.185150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.684 [2024-05-15 01:31:32.185160] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.684 [2024-05-15 01:31:32.185168] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.684 [2024-05-15 01:31:32.185187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.684 qpair failed and we were unable to recover it. 00:28:56.684 [2024-05-15 01:31:32.195052] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.684 [2024-05-15 01:31:32.195166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.684 [2024-05-15 01:31:32.195184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.684 [2024-05-15 01:31:32.195199] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.684 [2024-05-15 01:31:32.195208] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.684 [2024-05-15 01:31:32.195226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.684 qpair failed and we were unable to recover it. 00:28:56.684 [2024-05-15 01:31:32.205078] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.684 [2024-05-15 01:31:32.205357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.684 [2024-05-15 01:31:32.205376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.684 [2024-05-15 01:31:32.205385] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.684 [2024-05-15 01:31:32.205394] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.684 [2024-05-15 01:31:32.205413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.684 qpair failed and we were unable to recover it. 
00:28:56.684 [2024-05-15 01:31:32.215080] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.684 [2024-05-15 01:31:32.215189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.684 [2024-05-15 01:31:32.215213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.684 [2024-05-15 01:31:32.215223] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.684 [2024-05-15 01:31:32.215231] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.684 [2024-05-15 01:31:32.215250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.684 qpair failed and we were unable to recover it. 00:28:56.684 [2024-05-15 01:31:32.225123] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.684 [2024-05-15 01:31:32.225239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.684 [2024-05-15 01:31:32.225258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.684 [2024-05-15 01:31:32.225268] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.684 [2024-05-15 01:31:32.225276] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.684 [2024-05-15 01:31:32.225295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.684 qpair failed and we were unable to recover it. 00:28:56.684 [2024-05-15 01:31:32.235153] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.684 [2024-05-15 01:31:32.235269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.684 [2024-05-15 01:31:32.235287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.684 [2024-05-15 01:31:32.235297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.684 [2024-05-15 01:31:32.235305] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.684 [2024-05-15 01:31:32.235323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.684 qpair failed and we were unable to recover it. 
00:28:56.684 [2024-05-15 01:31:32.245189] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.684 [2024-05-15 01:31:32.245301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.684 [2024-05-15 01:31:32.245319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.684 [2024-05-15 01:31:32.245329] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.684 [2024-05-15 01:31:32.245337] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.684 [2024-05-15 01:31:32.245355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.684 qpair failed and we were unable to recover it. 00:28:56.684 [2024-05-15 01:31:32.255207] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.684 [2024-05-15 01:31:32.255313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.684 [2024-05-15 01:31:32.255336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.684 [2024-05-15 01:31:32.255345] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.684 [2024-05-15 01:31:32.255354] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.684 [2024-05-15 01:31:32.255372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.684 qpair failed and we were unable to recover it. 00:28:56.684 [2024-05-15 01:31:32.265236] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.684 [2024-05-15 01:31:32.265347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.684 [2024-05-15 01:31:32.265365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.684 [2024-05-15 01:31:32.265374] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.684 [2024-05-15 01:31:32.265383] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.684 [2024-05-15 01:31:32.265400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.684 qpair failed and we were unable to recover it. 
00:28:56.684 [2024-05-15 01:31:32.275288] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.684 [2024-05-15 01:31:32.275407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.684 [2024-05-15 01:31:32.275425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.684 [2024-05-15 01:31:32.275434] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.684 [2024-05-15 01:31:32.275443] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.684 [2024-05-15 01:31:32.275460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.684 qpair failed and we were unable to recover it. 00:28:56.684 [2024-05-15 01:31:32.285289] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.684 [2024-05-15 01:31:32.285394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.684 [2024-05-15 01:31:32.285412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.684 [2024-05-15 01:31:32.285422] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.684 [2024-05-15 01:31:32.285430] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.684 [2024-05-15 01:31:32.285447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.684 qpair failed and we were unable to recover it. 00:28:56.684 [2024-05-15 01:31:32.295486] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.684 [2024-05-15 01:31:32.295743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.684 [2024-05-15 01:31:32.295763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.684 [2024-05-15 01:31:32.295772] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.684 [2024-05-15 01:31:32.295781] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.684 [2024-05-15 01:31:32.295802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.684 qpair failed and we were unable to recover it. 
00:28:56.685 [2024-05-15 01:31:32.305337] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.685 [2024-05-15 01:31:32.305479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.685 [2024-05-15 01:31:32.305498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.685 [2024-05-15 01:31:32.305507] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.685 [2024-05-15 01:31:32.305515] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.685 [2024-05-15 01:31:32.305534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.685 qpair failed and we were unable to recover it. 00:28:56.685 [2024-05-15 01:31:32.315381] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.685 [2024-05-15 01:31:32.315646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.685 [2024-05-15 01:31:32.315665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.685 [2024-05-15 01:31:32.315675] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.685 [2024-05-15 01:31:32.315683] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.685 [2024-05-15 01:31:32.315701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.685 qpair failed and we were unable to recover it. 00:28:56.685 [2024-05-15 01:31:32.325400] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.685 [2024-05-15 01:31:32.325508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.685 [2024-05-15 01:31:32.325526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.685 [2024-05-15 01:31:32.325536] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.685 [2024-05-15 01:31:32.325544] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.685 [2024-05-15 01:31:32.325562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.685 qpair failed and we were unable to recover it. 
00:28:56.685 [2024-05-15 01:31:32.335440] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.685 [2024-05-15 01:31:32.335551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.685 [2024-05-15 01:31:32.335569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.685 [2024-05-15 01:31:32.335579] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.685 [2024-05-15 01:31:32.335587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.685 [2024-05-15 01:31:32.335605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.685 qpair failed and we were unable to recover it. 00:28:56.685 [2024-05-15 01:31:32.345480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.685 [2024-05-15 01:31:32.345595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.685 [2024-05-15 01:31:32.345617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.685 [2024-05-15 01:31:32.345626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.685 [2024-05-15 01:31:32.345635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.685 [2024-05-15 01:31:32.345654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.685 qpair failed and we were unable to recover it. 00:28:56.685 [2024-05-15 01:31:32.355498] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.685 [2024-05-15 01:31:32.355640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.685 [2024-05-15 01:31:32.355658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.685 [2024-05-15 01:31:32.355667] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.685 [2024-05-15 01:31:32.355676] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.685 [2024-05-15 01:31:32.355694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.685 qpair failed and we were unable to recover it. 
00:28:56.685 [2024-05-15 01:31:32.365508] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.685 [2024-05-15 01:31:32.365620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.685 [2024-05-15 01:31:32.365638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.685 [2024-05-15 01:31:32.365648] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.685 [2024-05-15 01:31:32.365656] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.685 [2024-05-15 01:31:32.365673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.685 qpair failed and we were unable to recover it. 00:28:56.946 [2024-05-15 01:31:32.375579] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.946 [2024-05-15 01:31:32.375688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.946 [2024-05-15 01:31:32.375706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.946 [2024-05-15 01:31:32.375716] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.946 [2024-05-15 01:31:32.375724] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.946 [2024-05-15 01:31:32.375742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.946 qpair failed and we were unable to recover it. 00:28:56.946 [2024-05-15 01:31:32.385572] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.946 [2024-05-15 01:31:32.385685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.946 [2024-05-15 01:31:32.385703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.946 [2024-05-15 01:31:32.385712] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.946 [2024-05-15 01:31:32.385720] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.946 [2024-05-15 01:31:32.385741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.946 qpair failed and we were unable to recover it. 
00:28:56.946 [2024-05-15 01:31:32.395615] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.946 [2024-05-15 01:31:32.395729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.946 [2024-05-15 01:31:32.395748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.946 [2024-05-15 01:31:32.395757] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.946 [2024-05-15 01:31:32.395766] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.946 [2024-05-15 01:31:32.395783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.946 qpair failed and we were unable to recover it. 00:28:56.946 [2024-05-15 01:31:32.405557] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.946 [2024-05-15 01:31:32.405667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.946 [2024-05-15 01:31:32.405686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.946 [2024-05-15 01:31:32.405696] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.946 [2024-05-15 01:31:32.405704] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.946 [2024-05-15 01:31:32.405723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.946 qpair failed and we were unable to recover it. 00:28:56.946 [2024-05-15 01:31:32.415661] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.946 [2024-05-15 01:31:32.415775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.946 [2024-05-15 01:31:32.415793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.946 [2024-05-15 01:31:32.415803] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.946 [2024-05-15 01:31:32.415812] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.946 [2024-05-15 01:31:32.415830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.946 qpair failed and we were unable to recover it. 
00:28:56.946 [2024-05-15 01:31:32.425703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.946 [2024-05-15 01:31:32.425817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.946 [2024-05-15 01:31:32.425835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.946 [2024-05-15 01:31:32.425845] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.946 [2024-05-15 01:31:32.425854] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.946 [2024-05-15 01:31:32.425871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.946 qpair failed and we were unable to recover it. 00:28:56.946 [2024-05-15 01:31:32.435642] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.946 [2024-05-15 01:31:32.435747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.946 [2024-05-15 01:31:32.435768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.946 [2024-05-15 01:31:32.435778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.946 [2024-05-15 01:31:32.435786] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.946 [2024-05-15 01:31:32.435805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.946 qpair failed and we were unable to recover it. 00:28:56.946 [2024-05-15 01:31:32.445750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.946 [2024-05-15 01:31:32.445859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.946 [2024-05-15 01:31:32.445877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.946 [2024-05-15 01:31:32.445887] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.946 [2024-05-15 01:31:32.445895] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.946 [2024-05-15 01:31:32.445913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.946 qpair failed and we were unable to recover it. 
00:28:56.946 [2024-05-15 01:31:32.455766] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.946 [2024-05-15 01:31:32.455885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.946 [2024-05-15 01:31:32.455904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.946 [2024-05-15 01:31:32.455914] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.947 [2024-05-15 01:31:32.455922] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.947 [2024-05-15 01:31:32.455940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.947 qpair failed and we were unable to recover it. 00:28:56.947 [2024-05-15 01:31:32.465740] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.947 [2024-05-15 01:31:32.465881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.947 [2024-05-15 01:31:32.465899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.947 [2024-05-15 01:31:32.465908] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.947 [2024-05-15 01:31:32.465917] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.947 [2024-05-15 01:31:32.465935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.947 qpair failed and we were unable to recover it. 00:28:56.947 [2024-05-15 01:31:32.475843] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.947 [2024-05-15 01:31:32.475955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.947 [2024-05-15 01:31:32.475973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.947 [2024-05-15 01:31:32.475982] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.947 [2024-05-15 01:31:32.475996] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.947 [2024-05-15 01:31:32.476014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.947 qpair failed and we were unable to recover it. 
00:28:56.947 [2024-05-15 01:31:32.485816] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.947 [2024-05-15 01:31:32.485927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.947 [2024-05-15 01:31:32.485945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.947 [2024-05-15 01:31:32.485954] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.947 [2024-05-15 01:31:32.485963] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.947 [2024-05-15 01:31:32.485981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.947 qpair failed and we were unable to recover it. 00:28:56.947 [2024-05-15 01:31:32.495840] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.947 [2024-05-15 01:31:32.495957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.947 [2024-05-15 01:31:32.495975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.947 [2024-05-15 01:31:32.495985] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.947 [2024-05-15 01:31:32.495993] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.947 [2024-05-15 01:31:32.496012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.947 qpair failed and we were unable to recover it. 00:28:56.947 [2024-05-15 01:31:32.505831] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.947 [2024-05-15 01:31:32.505939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.947 [2024-05-15 01:31:32.505957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.947 [2024-05-15 01:31:32.505966] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.947 [2024-05-15 01:31:32.505975] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.947 [2024-05-15 01:31:32.505993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.947 qpair failed and we were unable to recover it. 
00:28:56.947 [2024-05-15 01:31:32.515939] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.947 [2024-05-15 01:31:32.516052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.947 [2024-05-15 01:31:32.516071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.947 [2024-05-15 01:31:32.516080] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.947 [2024-05-15 01:31:32.516089] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.947 [2024-05-15 01:31:32.516106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.947 qpair failed and we were unable to recover it. 00:28:56.947 [2024-05-15 01:31:32.525959] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.947 [2024-05-15 01:31:32.526074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.947 [2024-05-15 01:31:32.526093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.947 [2024-05-15 01:31:32.526102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.947 [2024-05-15 01:31:32.526110] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.947 [2024-05-15 01:31:32.526128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.947 qpair failed and we were unable to recover it. 00:28:56.947 [2024-05-15 01:31:32.535991] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.947 [2024-05-15 01:31:32.536100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.947 [2024-05-15 01:31:32.536118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.947 [2024-05-15 01:31:32.536128] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.947 [2024-05-15 01:31:32.536136] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.947 [2024-05-15 01:31:32.536155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.947 qpair failed and we were unable to recover it. 
00:28:56.947 [2024-05-15 01:31:32.546036] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.947 [2024-05-15 01:31:32.546145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.947 [2024-05-15 01:31:32.546163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.947 [2024-05-15 01:31:32.546173] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.947 [2024-05-15 01:31:32.546182] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.947 [2024-05-15 01:31:32.546206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.947 qpair failed and we were unable to recover it. 00:28:56.947 [2024-05-15 01:31:32.556035] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.947 [2024-05-15 01:31:32.556167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.947 [2024-05-15 01:31:32.556185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.947 [2024-05-15 01:31:32.556200] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.947 [2024-05-15 01:31:32.556208] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.947 [2024-05-15 01:31:32.556226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.947 qpair failed and we were unable to recover it. 00:28:56.947 [2024-05-15 01:31:32.566072] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.947 [2024-05-15 01:31:32.566193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.947 [2024-05-15 01:31:32.566212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.947 [2024-05-15 01:31:32.566221] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.947 [2024-05-15 01:31:32.566233] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.947 [2024-05-15 01:31:32.566251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.947 qpair failed and we were unable to recover it. 
00:28:56.947 [2024-05-15 01:31:32.576114] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.947 [2024-05-15 01:31:32.576231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.947 [2024-05-15 01:31:32.576250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.947 [2024-05-15 01:31:32.576259] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.947 [2024-05-15 01:31:32.576268] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.947 [2024-05-15 01:31:32.576286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.947 qpair failed and we were unable to recover it. 00:28:56.947 [2024-05-15 01:31:32.586172] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.947 [2024-05-15 01:31:32.586299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.947 [2024-05-15 01:31:32.586318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.947 [2024-05-15 01:31:32.586327] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.947 [2024-05-15 01:31:32.586336] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.947 [2024-05-15 01:31:32.586354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.947 qpair failed and we were unable to recover it. 00:28:56.947 [2024-05-15 01:31:32.596095] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.947 [2024-05-15 01:31:32.596208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.947 [2024-05-15 01:31:32.596226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.948 [2024-05-15 01:31:32.596236] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.948 [2024-05-15 01:31:32.596244] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.948 [2024-05-15 01:31:32.596262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.948 qpair failed and we were unable to recover it. 
00:28:56.948 [2024-05-15 01:31:32.606202] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.948 [2024-05-15 01:31:32.606310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.948 [2024-05-15 01:31:32.606329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.948 [2024-05-15 01:31:32.606339] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.948 [2024-05-15 01:31:32.606348] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.948 [2024-05-15 01:31:32.606366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.948 qpair failed and we were unable to recover it. 00:28:56.948 [2024-05-15 01:31:32.616220] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.948 [2024-05-15 01:31:32.616348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.948 [2024-05-15 01:31:32.616366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.948 [2024-05-15 01:31:32.616376] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.948 [2024-05-15 01:31:32.616384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.948 [2024-05-15 01:31:32.616402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.948 qpair failed and we were unable to recover it. 00:28:56.948 [2024-05-15 01:31:32.626252] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:56.948 [2024-05-15 01:31:32.626360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:56.948 [2024-05-15 01:31:32.626378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:56.948 [2024-05-15 01:31:32.626388] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.948 [2024-05-15 01:31:32.626396] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:56.948 [2024-05-15 01:31:32.626414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:56.948 qpair failed and we were unable to recover it. 
00:28:57.209 [2024-05-15 01:31:32.636271] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.209 [2024-05-15 01:31:32.636390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.209 [2024-05-15 01:31:32.636407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.209 [2024-05-15 01:31:32.636417] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.209 [2024-05-15 01:31:32.636426] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.209 [2024-05-15 01:31:32.636443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.209 qpair failed and we were unable to recover it. 00:28:57.209 [2024-05-15 01:31:32.646299] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.209 [2024-05-15 01:31:32.646409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.209 [2024-05-15 01:31:32.646427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.209 [2024-05-15 01:31:32.646437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.209 [2024-05-15 01:31:32.646445] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.209 [2024-05-15 01:31:32.646464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.209 qpair failed and we were unable to recover it. 00:28:57.209 [2024-05-15 01:31:32.656328] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.209 [2024-05-15 01:31:32.656435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.209 [2024-05-15 01:31:32.656453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.209 [2024-05-15 01:31:32.656463] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.209 [2024-05-15 01:31:32.656474] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.209 [2024-05-15 01:31:32.656492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.209 qpair failed and we were unable to recover it. 
00:28:57.209 [2024-05-15 01:31:32.666435] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.209 [2024-05-15 01:31:32.666545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.209 [2024-05-15 01:31:32.666563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.209 [2024-05-15 01:31:32.666573] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.209 [2024-05-15 01:31:32.666581] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.209 [2024-05-15 01:31:32.666599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.209 qpair failed and we were unable to recover it. 00:28:57.209 [2024-05-15 01:31:32.676392] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.209 [2024-05-15 01:31:32.676506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.209 [2024-05-15 01:31:32.676524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.209 [2024-05-15 01:31:32.676534] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.209 [2024-05-15 01:31:32.676543] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.209 [2024-05-15 01:31:32.676560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.209 qpair failed and we were unable to recover it. 00:28:57.209 [2024-05-15 01:31:32.686416] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.209 [2024-05-15 01:31:32.686521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.209 [2024-05-15 01:31:32.686539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.209 [2024-05-15 01:31:32.686549] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.209 [2024-05-15 01:31:32.686557] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.209 [2024-05-15 01:31:32.686575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.209 qpair failed and we were unable to recover it. 
00:28:57.209 [2024-05-15 01:31:32.696453] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.209 [2024-05-15 01:31:32.696571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.209 [2024-05-15 01:31:32.696590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.209 [2024-05-15 01:31:32.696599] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.209 [2024-05-15 01:31:32.696608] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.209 [2024-05-15 01:31:32.696626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.209 qpair failed and we were unable to recover it. 00:28:57.209 [2024-05-15 01:31:32.706482] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.209 [2024-05-15 01:31:32.706591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.209 [2024-05-15 01:31:32.706610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.209 [2024-05-15 01:31:32.706619] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.209 [2024-05-15 01:31:32.706628] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.209 [2024-05-15 01:31:32.706645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.209 qpair failed and we were unable to recover it. 00:28:57.209 [2024-05-15 01:31:32.716505] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.209 [2024-05-15 01:31:32.716638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.209 [2024-05-15 01:31:32.716656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.209 [2024-05-15 01:31:32.716665] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.209 [2024-05-15 01:31:32.716673] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.209 [2024-05-15 01:31:32.716691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.209 qpair failed and we were unable to recover it. 
00:28:57.209 [2024-05-15 01:31:32.726542] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.209 [2024-05-15 01:31:32.726652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.209 [2024-05-15 01:31:32.726670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.209 [2024-05-15 01:31:32.726680] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.209 [2024-05-15 01:31:32.726688] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.210 [2024-05-15 01:31:32.726706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.210 qpair failed and we were unable to recover it. 00:28:57.210 [2024-05-15 01:31:32.736554] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.210 [2024-05-15 01:31:32.736826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.210 [2024-05-15 01:31:32.736846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.210 [2024-05-15 01:31:32.736855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.210 [2024-05-15 01:31:32.736864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.210 [2024-05-15 01:31:32.736881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.210 qpair failed and we were unable to recover it. 00:28:57.210 [2024-05-15 01:31:32.746592] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.210 [2024-05-15 01:31:32.746705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.210 [2024-05-15 01:31:32.746724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.210 [2024-05-15 01:31:32.746736] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.210 [2024-05-15 01:31:32.746744] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.210 [2024-05-15 01:31:32.746762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.210 qpair failed and we were unable to recover it. 
00:28:57.210 [2024-05-15 01:31:32.756610] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.210 [2024-05-15 01:31:32.756720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.210 [2024-05-15 01:31:32.756738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.210 [2024-05-15 01:31:32.756748] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.210 [2024-05-15 01:31:32.756756] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.210 [2024-05-15 01:31:32.756773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.210 qpair failed and we were unable to recover it. 00:28:57.210 [2024-05-15 01:31:32.766813] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.210 [2024-05-15 01:31:32.766968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.210 [2024-05-15 01:31:32.766987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.210 [2024-05-15 01:31:32.766996] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.210 [2024-05-15 01:31:32.767004] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.210 [2024-05-15 01:31:32.767023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.210 qpair failed and we were unable to recover it. 00:28:57.210 [2024-05-15 01:31:32.776643] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.210 [2024-05-15 01:31:32.776753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.210 [2024-05-15 01:31:32.776771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.210 [2024-05-15 01:31:32.776781] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.210 [2024-05-15 01:31:32.776790] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.210 [2024-05-15 01:31:32.776807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.210 qpair failed and we were unable to recover it. 
00:28:57.210 [2024-05-15 01:31:32.786703] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.210 [2024-05-15 01:31:32.786814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.210 [2024-05-15 01:31:32.786832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.210 [2024-05-15 01:31:32.786841] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.210 [2024-05-15 01:31:32.786849] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.210 [2024-05-15 01:31:32.786867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.210 qpair failed and we were unable to recover it. 00:28:57.210 [2024-05-15 01:31:32.796719] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.210 [2024-05-15 01:31:32.796837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.210 [2024-05-15 01:31:32.796855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.210 [2024-05-15 01:31:32.796865] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.210 [2024-05-15 01:31:32.796873] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.210 [2024-05-15 01:31:32.796891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.210 qpair failed and we were unable to recover it. 00:28:57.210 [2024-05-15 01:31:32.806750] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.210 [2024-05-15 01:31:32.806855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.210 [2024-05-15 01:31:32.806873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.210 [2024-05-15 01:31:32.806882] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.210 [2024-05-15 01:31:32.806891] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.210 [2024-05-15 01:31:32.806908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.210 qpair failed and we were unable to recover it. 
00:28:57.210 [2024-05-15 01:31:32.816775] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.210 [2024-05-15 01:31:32.816884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.210 [2024-05-15 01:31:32.816902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.210 [2024-05-15 01:31:32.816912] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.210 [2024-05-15 01:31:32.816920] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.210 [2024-05-15 01:31:32.816939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.210 qpair failed and we were unable to recover it. 00:28:57.210 [2024-05-15 01:31:32.826814] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.210 [2024-05-15 01:31:32.826923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.210 [2024-05-15 01:31:32.826942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.210 [2024-05-15 01:31:32.826951] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.210 [2024-05-15 01:31:32.826959] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.210 [2024-05-15 01:31:32.826977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.210 qpair failed and we were unable to recover it. 00:28:57.210 [2024-05-15 01:31:32.836844] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.210 [2024-05-15 01:31:32.836955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.210 [2024-05-15 01:31:32.836973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.210 [2024-05-15 01:31:32.836986] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.210 [2024-05-15 01:31:32.836994] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.210 [2024-05-15 01:31:32.837012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.210 qpair failed and we were unable to recover it. 
00:28:57.210 [2024-05-15 01:31:32.846862] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.210 [2024-05-15 01:31:32.846986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.210 [2024-05-15 01:31:32.847005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.210 [2024-05-15 01:31:32.847014] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.210 [2024-05-15 01:31:32.847022] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.210 [2024-05-15 01:31:32.847041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.210 qpair failed and we were unable to recover it. 00:28:57.210 [2024-05-15 01:31:32.856872] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.210 [2024-05-15 01:31:32.856979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.210 [2024-05-15 01:31:32.856997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.210 [2024-05-15 01:31:32.857007] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.210 [2024-05-15 01:31:32.857015] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.210 [2024-05-15 01:31:32.857033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.210 qpair failed and we were unable to recover it. 00:28:57.210 [2024-05-15 01:31:32.866909] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.210 [2024-05-15 01:31:32.867019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.210 [2024-05-15 01:31:32.867037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.210 [2024-05-15 01:31:32.867046] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.210 [2024-05-15 01:31:32.867055] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.211 [2024-05-15 01:31:32.867072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.211 qpair failed and we were unable to recover it. 
00:28:57.211 [2024-05-15 01:31:32.876863] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.211 [2024-05-15 01:31:32.876976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.211 [2024-05-15 01:31:32.876995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.211 [2024-05-15 01:31:32.877004] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.211 [2024-05-15 01:31:32.877012] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.211 [2024-05-15 01:31:32.877030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.211 qpair failed and we were unable to recover it. 00:28:57.211 [2024-05-15 01:31:32.886969] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.211 [2024-05-15 01:31:32.887076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.211 [2024-05-15 01:31:32.887094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.211 [2024-05-15 01:31:32.887103] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.211 [2024-05-15 01:31:32.887111] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.211 [2024-05-15 01:31:32.887129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.211 qpair failed and we were unable to recover it. 00:28:57.211 [2024-05-15 01:31:32.896999] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.211 [2024-05-15 01:31:32.897111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.211 [2024-05-15 01:31:32.897130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.211 [2024-05-15 01:31:32.897139] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.211 [2024-05-15 01:31:32.897148] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.211 [2024-05-15 01:31:32.897166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.211 qpair failed and we were unable to recover it. 
00:28:57.472 [2024-05-15 01:31:32.907036] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.472 [2024-05-15 01:31:32.907158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.472 [2024-05-15 01:31:32.907176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.472 [2024-05-15 01:31:32.907186] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.472 [2024-05-15 01:31:32.907199] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.472 [2024-05-15 01:31:32.907217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.472 qpair failed and we were unable to recover it. 00:28:57.472 [2024-05-15 01:31:32.917047] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.472 [2024-05-15 01:31:32.917160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.472 [2024-05-15 01:31:32.917179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.472 [2024-05-15 01:31:32.917188] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.472 [2024-05-15 01:31:32.917202] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.472 [2024-05-15 01:31:32.917220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.472 qpair failed and we were unable to recover it. 00:28:57.472 [2024-05-15 01:31:32.927062] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.472 [2024-05-15 01:31:32.927169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.472 [2024-05-15 01:31:32.927187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.472 [2024-05-15 01:31:32.927207] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.472 [2024-05-15 01:31:32.927215] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.472 [2024-05-15 01:31:32.927233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.472 qpair failed and we were unable to recover it. 
00:28:57.472 [2024-05-15 01:31:32.937098] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.472 [2024-05-15 01:31:32.937209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.472 [2024-05-15 01:31:32.937228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.472 [2024-05-15 01:31:32.937237] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.472 [2024-05-15 01:31:32.937246] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.472 [2024-05-15 01:31:32.937264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.472 qpair failed and we were unable to recover it. 00:28:57.472 [2024-05-15 01:31:32.947093] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.472 [2024-05-15 01:31:32.947210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.472 [2024-05-15 01:31:32.947228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.472 [2024-05-15 01:31:32.947238] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.472 [2024-05-15 01:31:32.947247] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.472 [2024-05-15 01:31:32.947265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.472 qpair failed and we were unable to recover it. 00:28:57.472 [2024-05-15 01:31:32.957151] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.472 [2024-05-15 01:31:32.957269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.472 [2024-05-15 01:31:32.957287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.472 [2024-05-15 01:31:32.957297] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.472 [2024-05-15 01:31:32.957306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.472 [2024-05-15 01:31:32.957324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.472 qpair failed and we were unable to recover it. 
00:28:57.472 [2024-05-15 01:31:32.967185] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.472 [2024-05-15 01:31:32.967303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.472 [2024-05-15 01:31:32.967322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.472 [2024-05-15 01:31:32.967331] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.472 [2024-05-15 01:31:32.967340] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.472 [2024-05-15 01:31:32.967358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.472 qpair failed and we were unable to recover it. 00:28:57.472 [2024-05-15 01:31:32.977165] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.472 [2024-05-15 01:31:32.977281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.472 [2024-05-15 01:31:32.977300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.472 [2024-05-15 01:31:32.977310] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.472 [2024-05-15 01:31:32.977319] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.472 [2024-05-15 01:31:32.977337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.472 qpair failed and we were unable to recover it. 00:28:57.472 [2024-05-15 01:31:32.987237] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.472 [2024-05-15 01:31:32.987348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.472 [2024-05-15 01:31:32.987366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.472 [2024-05-15 01:31:32.987376] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.472 [2024-05-15 01:31:32.987384] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.472 [2024-05-15 01:31:32.987402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.472 qpair failed and we were unable to recover it. 
00:28:57.472 [2024-05-15 01:31:32.997250] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.472 [2024-05-15 01:31:32.997364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.472 [2024-05-15 01:31:32.997382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.472 [2024-05-15 01:31:32.997392] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.472 [2024-05-15 01:31:32.997400] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.472 [2024-05-15 01:31:32.997418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.472 qpair failed and we were unable to recover it. 00:28:57.472 [2024-05-15 01:31:33.007214] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.472 [2024-05-15 01:31:33.007323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.472 [2024-05-15 01:31:33.007341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.472 [2024-05-15 01:31:33.007351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.472 [2024-05-15 01:31:33.007359] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.472 [2024-05-15 01:31:33.007377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.472 qpair failed and we were unable to recover it. 00:28:57.472 [2024-05-15 01:31:33.017293] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.472 [2024-05-15 01:31:33.017407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.472 [2024-05-15 01:31:33.017429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.472 [2024-05-15 01:31:33.017439] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.472 [2024-05-15 01:31:33.017447] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.472 [2024-05-15 01:31:33.017465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.472 qpair failed and we were unable to recover it. 
00:28:57.472 [2024-05-15 01:31:33.027355] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.472 [2024-05-15 01:31:33.027469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.472 [2024-05-15 01:31:33.027487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.473 [2024-05-15 01:31:33.027497] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.473 [2024-05-15 01:31:33.027506] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.473 [2024-05-15 01:31:33.027523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.473 qpair failed and we were unable to recover it. 00:28:57.473 [2024-05-15 01:31:33.037383] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.473 [2024-05-15 01:31:33.037506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.473 [2024-05-15 01:31:33.037524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.473 [2024-05-15 01:31:33.037533] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.473 [2024-05-15 01:31:33.037542] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.473 [2024-05-15 01:31:33.037560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.473 qpair failed and we were unable to recover it. 00:28:57.473 [2024-05-15 01:31:33.047415] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.473 [2024-05-15 01:31:33.047526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.473 [2024-05-15 01:31:33.047543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.473 [2024-05-15 01:31:33.047553] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.473 [2024-05-15 01:31:33.047562] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.473 [2024-05-15 01:31:33.047580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.473 qpair failed and we were unable to recover it. 
00:28:57.473 [2024-05-15 01:31:33.057531] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.473 [2024-05-15 01:31:33.057672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.473 [2024-05-15 01:31:33.057690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.473 [2024-05-15 01:31:33.057699] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.473 [2024-05-15 01:31:33.057708] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.473 [2024-05-15 01:31:33.057729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.473 qpair failed and we were unable to recover it. 00:28:57.473 [2024-05-15 01:31:33.067507] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.473 [2024-05-15 01:31:33.067622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.473 [2024-05-15 01:31:33.067640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.473 [2024-05-15 01:31:33.067650] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.473 [2024-05-15 01:31:33.067659] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.473 [2024-05-15 01:31:33.067676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.473 qpair failed and we were unable to recover it. 00:28:57.473 [2024-05-15 01:31:33.077496] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.473 [2024-05-15 01:31:33.077611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.473 [2024-05-15 01:31:33.077629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.473 [2024-05-15 01:31:33.077638] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.473 [2024-05-15 01:31:33.077647] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.473 [2024-05-15 01:31:33.077665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.473 qpair failed and we were unable to recover it. 
00:28:57.473 [2024-05-15 01:31:33.087516] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.473 [2024-05-15 01:31:33.087789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.473 [2024-05-15 01:31:33.087809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.473 [2024-05-15 01:31:33.087818] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.473 [2024-05-15 01:31:33.087827] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.473 [2024-05-15 01:31:33.087845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.473 qpair failed and we were unable to recover it. 00:28:57.473 [2024-05-15 01:31:33.097480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.473 [2024-05-15 01:31:33.097586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.473 [2024-05-15 01:31:33.097604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.473 [2024-05-15 01:31:33.097613] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.473 [2024-05-15 01:31:33.097622] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.473 [2024-05-15 01:31:33.097640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.473 qpair failed and we were unable to recover it. 00:28:57.473 [2024-05-15 01:31:33.107558] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.473 [2024-05-15 01:31:33.107669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.473 [2024-05-15 01:31:33.107691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.473 [2024-05-15 01:31:33.107700] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.473 [2024-05-15 01:31:33.107709] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.473 [2024-05-15 01:31:33.107727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.473 qpair failed and we were unable to recover it. 
00:28:57.473 [2024-05-15 01:31:33.117583] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.473 [2024-05-15 01:31:33.117694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.473 [2024-05-15 01:31:33.117713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.473 [2024-05-15 01:31:33.117722] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.473 [2024-05-15 01:31:33.117731] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.473 [2024-05-15 01:31:33.117749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.473 qpair failed and we were unable to recover it. 00:28:57.473 [2024-05-15 01:31:33.127549] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.473 [2024-05-15 01:31:33.127659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.473 [2024-05-15 01:31:33.127677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.473 [2024-05-15 01:31:33.127686] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.473 [2024-05-15 01:31:33.127695] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.473 [2024-05-15 01:31:33.127712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.473 qpair failed and we were unable to recover it. 00:28:57.473 [2024-05-15 01:31:33.137623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.473 [2024-05-15 01:31:33.137728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.473 [2024-05-15 01:31:33.137747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.473 [2024-05-15 01:31:33.137757] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.473 [2024-05-15 01:31:33.137765] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.473 [2024-05-15 01:31:33.137783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.473 qpair failed and we were unable to recover it. 
00:28:57.473 [2024-05-15 01:31:33.147616] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.473 [2024-05-15 01:31:33.147877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.473 [2024-05-15 01:31:33.147896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.473 [2024-05-15 01:31:33.147905] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.473 [2024-05-15 01:31:33.147914] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.473 [2024-05-15 01:31:33.147936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.473 qpair failed and we were unable to recover it. 00:28:57.473 [2024-05-15 01:31:33.157695] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.473 [2024-05-15 01:31:33.157805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.473 [2024-05-15 01:31:33.157824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.473 [2024-05-15 01:31:33.157833] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.473 [2024-05-15 01:31:33.157842] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.473 [2024-05-15 01:31:33.157860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.473 qpair failed and we were unable to recover it. 00:28:57.734 [2024-05-15 01:31:33.167700] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.734 [2024-05-15 01:31:33.167972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.734 [2024-05-15 01:31:33.167991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.734 [2024-05-15 01:31:33.168000] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.734 [2024-05-15 01:31:33.168009] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.734 [2024-05-15 01:31:33.168027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.734 qpair failed and we were unable to recover it. 
00:28:57.734 [2024-05-15 01:31:33.177694] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.734 [2024-05-15 01:31:33.177814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.735 [2024-05-15 01:31:33.177833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.735 [2024-05-15 01:31:33.177842] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.735 [2024-05-15 01:31:33.177851] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.735 [2024-05-15 01:31:33.177869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.735 qpair failed and we were unable to recover it. 00:28:57.735 [2024-05-15 01:31:33.187740] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.735 [2024-05-15 01:31:33.187851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.735 [2024-05-15 01:31:33.187869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.735 [2024-05-15 01:31:33.187879] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.735 [2024-05-15 01:31:33.187887] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.735 [2024-05-15 01:31:33.187905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.735 qpair failed and we were unable to recover it. 00:28:57.735 [2024-05-15 01:31:33.197790] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.735 [2024-05-15 01:31:33.198069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.735 [2024-05-15 01:31:33.198092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.735 [2024-05-15 01:31:33.198102] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.735 [2024-05-15 01:31:33.198110] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.735 [2024-05-15 01:31:33.198128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.735 qpair failed and we were unable to recover it. 
00:28:57.735 [2024-05-15 01:31:33.207839] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.735 [2024-05-15 01:31:33.207952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.735 [2024-05-15 01:31:33.207970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.735 [2024-05-15 01:31:33.207980] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.735 [2024-05-15 01:31:33.207988] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.735 [2024-05-15 01:31:33.208007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.735 qpair failed and we were unable to recover it. 00:28:57.735 [2024-05-15 01:31:33.217886] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.735 [2024-05-15 01:31:33.218033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.735 [2024-05-15 01:31:33.218052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.735 [2024-05-15 01:31:33.218061] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.735 [2024-05-15 01:31:33.218070] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.735 [2024-05-15 01:31:33.218088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.735 qpair failed and we were unable to recover it. 00:28:57.735 [2024-05-15 01:31:33.227846] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.735 [2024-05-15 01:31:33.227959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.735 [2024-05-15 01:31:33.227978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.735 [2024-05-15 01:31:33.227988] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.735 [2024-05-15 01:31:33.227997] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.735 [2024-05-15 01:31:33.228014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.735 qpair failed and we were unable to recover it. 
00:28:57.735 [2024-05-15 01:31:33.237948] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.735 [2024-05-15 01:31:33.238059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.735 [2024-05-15 01:31:33.238077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.735 [2024-05-15 01:31:33.238087] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.735 [2024-05-15 01:31:33.238095] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.735 [2024-05-15 01:31:33.238118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.735 qpair failed and we were unable to recover it. 00:28:57.735 [2024-05-15 01:31:33.247987] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.735 [2024-05-15 01:31:33.248107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.735 [2024-05-15 01:31:33.248125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.735 [2024-05-15 01:31:33.248134] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.735 [2024-05-15 01:31:33.248143] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.735 [2024-05-15 01:31:33.248161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.735 qpair failed and we were unable to recover it. 00:28:57.735 [2024-05-15 01:31:33.257996] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.735 [2024-05-15 01:31:33.258108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.735 [2024-05-15 01:31:33.258127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.735 [2024-05-15 01:31:33.258136] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.735 [2024-05-15 01:31:33.258145] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.735 [2024-05-15 01:31:33.258163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.735 qpair failed and we were unable to recover it. 
00:28:57.735 [2024-05-15 01:31:33.267950] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.735 [2024-05-15 01:31:33.268076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.735 [2024-05-15 01:31:33.268095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.735 [2024-05-15 01:31:33.268104] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.735 [2024-05-15 01:31:33.268113] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.735 [2024-05-15 01:31:33.268130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.735 qpair failed and we were unable to recover it. 00:28:57.735 [2024-05-15 01:31:33.278024] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.735 [2024-05-15 01:31:33.278140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.735 [2024-05-15 01:31:33.278158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.735 [2024-05-15 01:31:33.278168] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.735 [2024-05-15 01:31:33.278176] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.735 [2024-05-15 01:31:33.278198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.735 qpair failed and we were unable to recover it. 00:28:57.735 [2024-05-15 01:31:33.288074] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.735 [2024-05-15 01:31:33.288183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.735 [2024-05-15 01:31:33.288209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.735 [2024-05-15 01:31:33.288219] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.735 [2024-05-15 01:31:33.288228] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.735 [2024-05-15 01:31:33.288246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.735 qpair failed and we were unable to recover it. 
00:28:57.735 [2024-05-15 01:31:33.298109] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.735 [2024-05-15 01:31:33.298225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.735 [2024-05-15 01:31:33.298244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.735 [2024-05-15 01:31:33.298253] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.735 [2024-05-15 01:31:33.298262] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.735 [2024-05-15 01:31:33.298281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.735 qpair failed and we were unable to recover it. 00:28:57.735 [2024-05-15 01:31:33.308051] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.735 [2024-05-15 01:31:33.308322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.735 [2024-05-15 01:31:33.308341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.735 [2024-05-15 01:31:33.308351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.735 [2024-05-15 01:31:33.308359] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.735 [2024-05-15 01:31:33.308378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.735 qpair failed and we were unable to recover it. 00:28:57.735 [2024-05-15 01:31:33.318138] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.735 [2024-05-15 01:31:33.318256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.736 [2024-05-15 01:31:33.318275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.736 [2024-05-15 01:31:33.318284] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.736 [2024-05-15 01:31:33.318293] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.736 [2024-05-15 01:31:33.318311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.736 qpair failed and we were unable to recover it. 
00:28:57.736 [2024-05-15 01:31:33.328177] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.736 [2024-05-15 01:31:33.328323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.736 [2024-05-15 01:31:33.328341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.736 [2024-05-15 01:31:33.328351] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.736 [2024-05-15 01:31:33.328362] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.736 [2024-05-15 01:31:33.328380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.736 qpair failed and we were unable to recover it. 00:28:57.736 [2024-05-15 01:31:33.338153] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.736 [2024-05-15 01:31:33.338270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.736 [2024-05-15 01:31:33.338288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.736 [2024-05-15 01:31:33.338298] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.736 [2024-05-15 01:31:33.338306] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.736 [2024-05-15 01:31:33.338324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.736 qpair failed and we were unable to recover it. 00:28:57.736 [2024-05-15 01:31:33.348268] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.736 [2024-05-15 01:31:33.348379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.736 [2024-05-15 01:31:33.348397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.736 [2024-05-15 01:31:33.348406] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.736 [2024-05-15 01:31:33.348414] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.736 [2024-05-15 01:31:33.348432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.736 qpair failed and we were unable to recover it. 
00:28:57.736 [2024-05-15 01:31:33.358210] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.736 [2024-05-15 01:31:33.358321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.736 [2024-05-15 01:31:33.358339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.736 [2024-05-15 01:31:33.358348] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.736 [2024-05-15 01:31:33.358357] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.736 [2024-05-15 01:31:33.358374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.736 qpair failed and we were unable to recover it. 00:28:57.736 [2024-05-15 01:31:33.368298] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.736 [2024-05-15 01:31:33.368575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.736 [2024-05-15 01:31:33.368594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.736 [2024-05-15 01:31:33.368603] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.736 [2024-05-15 01:31:33.368612] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.736 [2024-05-15 01:31:33.368630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.736 qpair failed and we were unable to recover it. 00:28:57.736 [2024-05-15 01:31:33.378332] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.736 [2024-05-15 01:31:33.378464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.736 [2024-05-15 01:31:33.378482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.736 [2024-05-15 01:31:33.378491] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.736 [2024-05-15 01:31:33.378500] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.736 [2024-05-15 01:31:33.378518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.736 qpair failed and we were unable to recover it. 
00:28:57.736 [2024-05-15 01:31:33.388350] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.736 [2024-05-15 01:31:33.388465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.736 [2024-05-15 01:31:33.388483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.736 [2024-05-15 01:31:33.388492] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.736 [2024-05-15 01:31:33.388501] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.736 [2024-05-15 01:31:33.388519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.736 qpair failed and we were unable to recover it. 00:28:57.736 [2024-05-15 01:31:33.398452] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.736 [2024-05-15 01:31:33.398564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.736 [2024-05-15 01:31:33.398583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.736 [2024-05-15 01:31:33.398593] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.736 [2024-05-15 01:31:33.398601] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.736 [2024-05-15 01:31:33.398620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.736 qpair failed and we were unable to recover it. 00:28:57.736 [2024-05-15 01:31:33.408437] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.736 [2024-05-15 01:31:33.408551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.736 [2024-05-15 01:31:33.408569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.736 [2024-05-15 01:31:33.408578] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.736 [2024-05-15 01:31:33.408587] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.736 [2024-05-15 01:31:33.408605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.736 qpair failed and we were unable to recover it. 
00:28:57.736 [2024-05-15 01:31:33.418452] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.736 [2024-05-15 01:31:33.418561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.736 [2024-05-15 01:31:33.418580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.736 [2024-05-15 01:31:33.418589] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.736 [2024-05-15 01:31:33.418601] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.736 [2024-05-15 01:31:33.418619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.736 qpair failed and we were unable to recover it. 00:28:57.997 [2024-05-15 01:31:33.428414] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.997 [2024-05-15 01:31:33.428527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.997 [2024-05-15 01:31:33.428546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.997 [2024-05-15 01:31:33.428555] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.997 [2024-05-15 01:31:33.428564] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.997 [2024-05-15 01:31:33.428582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.997 qpair failed and we were unable to recover it. 00:28:57.997 [2024-05-15 01:31:33.438521] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.997 [2024-05-15 01:31:33.438633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.997 [2024-05-15 01:31:33.438652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.997 [2024-05-15 01:31:33.438661] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.997 [2024-05-15 01:31:33.438670] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.997 [2024-05-15 01:31:33.438687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.997 qpair failed and we were unable to recover it. 
00:28:57.997 [2024-05-15 01:31:33.448456] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.997 [2024-05-15 01:31:33.448568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.997 [2024-05-15 01:31:33.448586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.997 [2024-05-15 01:31:33.448596] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.997 [2024-05-15 01:31:33.448604] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.997 [2024-05-15 01:31:33.448622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.997 qpair failed and we were unable to recover it. 00:28:57.997 [2024-05-15 01:31:33.458480] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.997 [2024-05-15 01:31:33.458635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.997 [2024-05-15 01:31:33.458653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.997 [2024-05-15 01:31:33.458663] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.997 [2024-05-15 01:31:33.458671] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.997 [2024-05-15 01:31:33.458689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.997 qpair failed and we were unable to recover it. 00:28:57.997 [2024-05-15 01:31:33.468571] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.997 [2024-05-15 01:31:33.468682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.997 [2024-05-15 01:31:33.468700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.997 [2024-05-15 01:31:33.468710] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.997 [2024-05-15 01:31:33.468718] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.997 [2024-05-15 01:31:33.468736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.997 qpair failed and we were unable to recover it. 
00:28:57.997 [2024-05-15 01:31:33.478617] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.997 [2024-05-15 01:31:33.478746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.997 [2024-05-15 01:31:33.478764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.997 [2024-05-15 01:31:33.478774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.997 [2024-05-15 01:31:33.478782] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.997 [2024-05-15 01:31:33.478800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.997 qpair failed and we were unable to recover it. 00:28:57.997 [2024-05-15 01:31:33.488613] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.997 [2024-05-15 01:31:33.488732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.997 [2024-05-15 01:31:33.488750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.997 [2024-05-15 01:31:33.488760] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.997 [2024-05-15 01:31:33.488768] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.997 [2024-05-15 01:31:33.488786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.997 qpair failed and we were unable to recover it. 00:28:57.998 [2024-05-15 01:31:33.498601] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.998 [2024-05-15 01:31:33.498717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.998 [2024-05-15 01:31:33.498735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.998 [2024-05-15 01:31:33.498745] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.998 [2024-05-15 01:31:33.498753] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.998 [2024-05-15 01:31:33.498771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.998 qpair failed and we were unable to recover it. 
00:28:57.998 [2024-05-15 01:31:33.508627] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.998 [2024-05-15 01:31:33.508736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.998 [2024-05-15 01:31:33.508754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.998 [2024-05-15 01:31:33.508767] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.998 [2024-05-15 01:31:33.508775] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.998 [2024-05-15 01:31:33.508792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.998 qpair failed and we were unable to recover it. 00:28:57.998 [2024-05-15 01:31:33.518639] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.998 [2024-05-15 01:31:33.518752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.998 [2024-05-15 01:31:33.518771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.998 [2024-05-15 01:31:33.518780] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.998 [2024-05-15 01:31:33.518789] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.998 [2024-05-15 01:31:33.518806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.998 qpair failed and we were unable to recover it. 00:28:57.998 [2024-05-15 01:31:33.528673] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.998 [2024-05-15 01:31:33.528782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.998 [2024-05-15 01:31:33.528800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.998 [2024-05-15 01:31:33.528809] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.998 [2024-05-15 01:31:33.528818] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.998 [2024-05-15 01:31:33.528835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.998 qpair failed and we were unable to recover it. 
00:28:57.998 [2024-05-15 01:31:33.538788] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.998 [2024-05-15 01:31:33.538895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.998 [2024-05-15 01:31:33.538913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.998 [2024-05-15 01:31:33.538923] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.998 [2024-05-15 01:31:33.538931] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.998 [2024-05-15 01:31:33.538949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.998 qpair failed and we were unable to recover it. 00:28:57.998 [2024-05-15 01:31:33.548832] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.998 [2024-05-15 01:31:33.548944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.998 [2024-05-15 01:31:33.548962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.998 [2024-05-15 01:31:33.548972] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.998 [2024-05-15 01:31:33.548981] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.998 [2024-05-15 01:31:33.548999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.998 qpair failed and we were unable to recover it. 00:28:57.998 [2024-05-15 01:31:33.558759] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.998 [2024-05-15 01:31:33.558869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.998 [2024-05-15 01:31:33.558887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.998 [2024-05-15 01:31:33.558897] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.998 [2024-05-15 01:31:33.558905] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.998 [2024-05-15 01:31:33.558924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.998 qpair failed and we were unable to recover it. 
00:28:57.998 [2024-05-15 01:31:33.568877] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.998 [2024-05-15 01:31:33.568987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.998 [2024-05-15 01:31:33.569005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.998 [2024-05-15 01:31:33.569014] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.998 [2024-05-15 01:31:33.569023] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.998 [2024-05-15 01:31:33.569041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.998 qpair failed and we were unable to recover it. 00:28:57.998 [2024-05-15 01:31:33.578915] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.998 [2024-05-15 01:31:33.579022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.998 [2024-05-15 01:31:33.579040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.998 [2024-05-15 01:31:33.579049] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.998 [2024-05-15 01:31:33.579058] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.998 [2024-05-15 01:31:33.579076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.998 qpair failed and we were unable to recover it. 00:28:57.998 [2024-05-15 01:31:33.588937] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.998 [2024-05-15 01:31:33.589048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.998 [2024-05-15 01:31:33.589067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.998 [2024-05-15 01:31:33.589076] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.998 [2024-05-15 01:31:33.589085] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.998 [2024-05-15 01:31:33.589102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.998 qpair failed and we were unable to recover it. 
00:28:57.998 [2024-05-15 01:31:33.598960] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.998 [2024-05-15 01:31:33.599233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.998 [2024-05-15 01:31:33.599252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.998 [2024-05-15 01:31:33.599264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.998 [2024-05-15 01:31:33.599273] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.998 [2024-05-15 01:31:33.599291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.998 qpair failed and we were unable to recover it. 00:28:57.998 [2024-05-15 01:31:33.608981] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.998 [2024-05-15 01:31:33.609093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.998 [2024-05-15 01:31:33.609111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.998 [2024-05-15 01:31:33.609121] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.998 [2024-05-15 01:31:33.609130] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.998 [2024-05-15 01:31:33.609148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.998 qpair failed and we were unable to recover it. 00:28:57.998 [2024-05-15 01:31:33.619017] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.998 [2024-05-15 01:31:33.619124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.998 [2024-05-15 01:31:33.619142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.998 [2024-05-15 01:31:33.619152] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.998 [2024-05-15 01:31:33.619160] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.998 [2024-05-15 01:31:33.619178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.998 qpair failed and we were unable to recover it. 
00:28:57.998 [2024-05-15 01:31:33.629103] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.998 [2024-05-15 01:31:33.629220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.998 [2024-05-15 01:31:33.629238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.998 [2024-05-15 01:31:33.629247] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.998 [2024-05-15 01:31:33.629256] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.998 [2024-05-15 01:31:33.629275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.998 qpair failed and we were unable to recover it. 00:28:57.999 [2024-05-15 01:31:33.639075] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.999 [2024-05-15 01:31:33.639186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.999 [2024-05-15 01:31:33.639209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.999 [2024-05-15 01:31:33.639219] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.999 [2024-05-15 01:31:33.639227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.999 [2024-05-15 01:31:33.639245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.999 qpair failed and we were unable to recover it. 00:28:57.999 [2024-05-15 01:31:33.649079] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.999 [2024-05-15 01:31:33.649185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.999 [2024-05-15 01:31:33.649209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.999 [2024-05-15 01:31:33.649219] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.999 [2024-05-15 01:31:33.649227] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.999 [2024-05-15 01:31:33.649245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.999 qpair failed and we were unable to recover it. 
00:28:57.999 [2024-05-15 01:31:33.659118] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.999 [2024-05-15 01:31:33.659228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.999 [2024-05-15 01:31:33.659246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.999 [2024-05-15 01:31:33.659255] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.999 [2024-05-15 01:31:33.659264] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.999 [2024-05-15 01:31:33.659282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.999 qpair failed and we were unable to recover it. 00:28:57.999 [2024-05-15 01:31:33.669159] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.999 [2024-05-15 01:31:33.669273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.999 [2024-05-15 01:31:33.669291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.999 [2024-05-15 01:31:33.669301] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.999 [2024-05-15 01:31:33.669309] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.999 [2024-05-15 01:31:33.669327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.999 qpair failed and we were unable to recover it. 00:28:57.999 [2024-05-15 01:31:33.679180] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:57.999 [2024-05-15 01:31:33.679308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:57.999 [2024-05-15 01:31:33.679326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:57.999 [2024-05-15 01:31:33.679335] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:57.999 [2024-05-15 01:31:33.679344] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:57.999 [2024-05-15 01:31:33.679362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:57.999 qpair failed and we were unable to recover it. 
00:28:58.260 [2024-05-15 01:31:33.689212] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.260 [2024-05-15 01:31:33.689324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.260 [2024-05-15 01:31:33.689343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.260 [2024-05-15 01:31:33.689356] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.260 [2024-05-15 01:31:33.689364] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:58.260 [2024-05-15 01:31:33.689382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.260 qpair failed and we were unable to recover it. 00:28:58.260 [2024-05-15 01:31:33.699235] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.260 [2024-05-15 01:31:33.699337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.260 [2024-05-15 01:31:33.699356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.260 [2024-05-15 01:31:33.699365] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.260 [2024-05-15 01:31:33.699374] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:58.260 [2024-05-15 01:31:33.699393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.260 qpair failed and we were unable to recover it. 00:28:58.260 [2024-05-15 01:31:33.709291] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.260 [2024-05-15 01:31:33.709403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.260 [2024-05-15 01:31:33.709422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.260 [2024-05-15 01:31:33.709432] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.260 [2024-05-15 01:31:33.709440] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:58.260 [2024-05-15 01:31:33.709459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.260 qpair failed and we were unable to recover it. 
00:28:58.260 [2024-05-15 01:31:33.719293] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.260 [2024-05-15 01:31:33.719408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.260 [2024-05-15 01:31:33.719427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.260 [2024-05-15 01:31:33.719436] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.260 [2024-05-15 01:31:33.719445] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:58.260 [2024-05-15 01:31:33.719462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.260 qpair failed and we were unable to recover it. 00:28:58.260 [2024-05-15 01:31:33.729306] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.260 [2024-05-15 01:31:33.729410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.260 [2024-05-15 01:31:33.729428] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.260 [2024-05-15 01:31:33.729437] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.260 [2024-05-15 01:31:33.729446] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:58.260 [2024-05-15 01:31:33.729464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.260 qpair failed and we were unable to recover it. 00:28:58.260 [2024-05-15 01:31:33.739261] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.260 [2024-05-15 01:31:33.739370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.260 [2024-05-15 01:31:33.739389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.260 [2024-05-15 01:31:33.739398] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.260 [2024-05-15 01:31:33.739407] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:58.260 [2024-05-15 01:31:33.739425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.260 qpair failed and we were unable to recover it. 
00:28:58.260 [2024-05-15 01:31:33.749382] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.260 [2024-05-15 01:31:33.749491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.260 [2024-05-15 01:31:33.749509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.260 [2024-05-15 01:31:33.749519] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.260 [2024-05-15 01:31:33.749527] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:58.260 [2024-05-15 01:31:33.749545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.260 qpair failed and we were unable to recover it. 00:28:58.260 [2024-05-15 01:31:33.759405] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.260 [2024-05-15 01:31:33.759514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.260 [2024-05-15 01:31:33.759532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.260 [2024-05-15 01:31:33.759541] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.260 [2024-05-15 01:31:33.759550] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:58.260 [2024-05-15 01:31:33.759568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.260 qpair failed and we were unable to recover it. 00:28:58.260 [2024-05-15 01:31:33.769430] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.260 [2024-05-15 01:31:33.769542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.260 [2024-05-15 01:31:33.769561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.260 [2024-05-15 01:31:33.769570] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.260 [2024-05-15 01:31:33.769579] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:58.260 [2024-05-15 01:31:33.769596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.260 qpair failed and we were unable to recover it. 
00:28:58.260 [2024-05-15 01:31:33.779447] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.260 [2024-05-15 01:31:33.779555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.260 [2024-05-15 01:31:33.779576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.260 [2024-05-15 01:31:33.779586] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.260 [2024-05-15 01:31:33.779594] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:58.260 [2024-05-15 01:31:33.779612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.260 qpair failed and we were unable to recover it. 00:28:58.260 [2024-05-15 01:31:33.789489] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.260 [2024-05-15 01:31:33.789599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.260 [2024-05-15 01:31:33.789617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.260 [2024-05-15 01:31:33.789626] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.260 [2024-05-15 01:31:33.789635] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:58.260 [2024-05-15 01:31:33.789653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.260 qpair failed and we were unable to recover it. 00:28:58.260 [2024-05-15 01:31:33.799506] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.260 [2024-05-15 01:31:33.799614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.260 [2024-05-15 01:31:33.799633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.260 [2024-05-15 01:31:33.799642] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.261 [2024-05-15 01:31:33.799651] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:58.261 [2024-05-15 01:31:33.799669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.261 qpair failed and we were unable to recover it. 
00:28:58.261 [2024-05-15 01:31:33.809547] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.261 [2024-05-15 01:31:33.809657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.261 [2024-05-15 01:31:33.809675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.261 [2024-05-15 01:31:33.809685] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.261 [2024-05-15 01:31:33.809693] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21f8560 00:28:58.261 [2024-05-15 01:31:33.809711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:58.261 qpair failed and we were unable to recover it. 00:28:58.261 [2024-05-15 01:31:33.810035] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2206140 is same with the state(5) to be set 00:28:58.261 [2024-05-15 01:31:33.819591] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.261 [2024-05-15 01:31:33.819734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.261 [2024-05-15 01:31:33.819763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.261 [2024-05-15 01:31:33.819778] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.261 [2024-05-15 01:31:33.819794] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f13dc000b90 00:28:58.261 [2024-05-15 01:31:33.819824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:58.261 qpair failed and we were unable to recover it. 00:28:58.261 [2024-05-15 01:31:33.829602] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.261 [2024-05-15 01:31:33.829713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.261 [2024-05-15 01:31:33.829731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.261 [2024-05-15 01:31:33.829742] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.261 [2024-05-15 01:31:33.829750] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f13dc000b90 00:28:58.261 [2024-05-15 01:31:33.829771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:58.261 qpair failed and we were unable to recover it. 
00:28:58.261 [2024-05-15 01:31:33.839623] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.261 [2024-05-15 01:31:33.839741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.261 [2024-05-15 01:31:33.839763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.261 [2024-05-15 01:31:33.839774] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.261 [2024-05-15 01:31:33.839784] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f13e8000b90 00:28:58.261 [2024-05-15 01:31:33.839805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:58.261 qpair failed and we were unable to recover it. 00:28:58.261 [2024-05-15 01:31:33.849669] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.261 [2024-05-15 01:31:33.849826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.261 [2024-05-15 01:31:33.849845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.261 [2024-05-15 01:31:33.849855] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.261 [2024-05-15 01:31:33.849864] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f13e8000b90 00:28:58.261 [2024-05-15 01:31:33.849884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:58.261 qpair failed and we were unable to recover it. 00:28:58.261 [2024-05-15 01:31:33.859711] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.261 [2024-05-15 01:31:33.859853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.261 [2024-05-15 01:31:33.859882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.261 [2024-05-15 01:31:33.859897] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.261 [2024-05-15 01:31:33.859910] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f13e4000b90 00:28:58.261 [2024-05-15 01:31:33.859939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:58.261 qpair failed and we were unable to recover it. 
00:28:58.261 [2024-05-15 01:31:33.869720] ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:58.261 [2024-05-15 01:31:33.869837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:58.261 [2024-05-15 01:31:33.869856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:58.261 [2024-05-15 01:31:33.869866] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:58.261 [2024-05-15 01:31:33.869874] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f13e4000b90 00:28:58.261 [2024-05-15 01:31:33.869893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:58.261 qpair failed and we were unable to recover it. 00:28:58.261 [2024-05-15 01:31:33.870177] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2206140 (9): Bad file descriptor 00:28:58.261 Initializing NVMe Controllers 00:28:58.261 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:58.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:58.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:58.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:58.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:58.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:58.261 Initialization complete. Launching workers. 
00:28:58.261 Starting thread on core 1 00:28:58.261 Starting thread on core 2 00:28:58.261 Starting thread on core 3 00:28:58.261 Starting thread on core 0 00:28:58.261 01:31:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@59 -- # sync 00:28:58.261 00:28:58.261 real 0m11.334s 00:28:58.261 user 0m20.092s 00:28:58.261 sys 0m4.883s 00:28:58.261 01:31:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:58.261 01:31:33 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:58.261 ************************************ 00:28:58.261 END TEST nvmf_target_disconnect_tc2 00:28:58.261 ************************************ 00:28:58.261 01:31:33 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:28:58.261 01:31:33 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:58.261 01:31:33 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@85 -- # nvmftestfini 00:28:58.261 01:31:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:58.261 01:31:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:28:58.261 01:31:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:58.261 01:31:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:28:58.261 01:31:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:58.261 01:31:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:58.261 rmmod nvme_tcp 00:28:58.521 rmmod nvme_fabrics 00:28:58.521 rmmod nvme_keyring 00:28:58.521 01:31:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:58.521 01:31:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:28:58.521 01:31:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:28:58.521 01:31:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 80445 ']' 00:28:58.521 01:31:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 80445 00:28:58.521 01:31:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 80445 ']' 00:28:58.521 01:31:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 80445 00:28:58.521 01:31:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:28:58.521 01:31:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:58.521 01:31:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80445 00:28:58.521 01:31:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:28:58.521 01:31:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:28:58.521 01:31:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80445' 00:28:58.521 killing process with pid 80445 00:28:58.521 01:31:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 80445 00:28:58.521 [2024-05-15 01:31:34.060844] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 
00:28:58.521 01:31:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 80445 00:28:58.781 01:31:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:58.781 01:31:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:58.781 01:31:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:58.781 01:31:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:58.781 01:31:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:58.781 01:31:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.781 01:31:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:58.781 01:31:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.689 01:31:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:00.689 00:29:00.689 real 0m20.963s 00:29:00.689 user 0m47.738s 00:29:00.689 sys 0m10.581s 00:29:00.689 01:31:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:00.689 01:31:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:00.689 ************************************ 00:29:00.689 END TEST nvmf_target_disconnect 00:29:00.689 ************************************ 00:29:00.949 01:31:36 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:29:00.949 01:31:36 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:00.949 01:31:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:00.949 01:31:36 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:29:00.949 00:29:00.949 real 22m14.827s 00:29:00.949 user 45m52.972s 00:29:00.949 sys 8m11.733s 00:29:00.949 01:31:36 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:00.949 01:31:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:00.949 ************************************ 00:29:00.949 END TEST nvmf_tcp 00:29:00.949 ************************************ 00:29:00.949 01:31:36 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:29:00.949 01:31:36 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:00.949 01:31:36 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:00.949 01:31:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:00.949 01:31:36 -- common/autotest_common.sh@10 -- # set +x 00:29:00.949 ************************************ 00:29:00.949 START TEST spdkcli_nvmf_tcp 00:29:00.949 ************************************ 00:29:00.949 01:31:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:00.949 * Looking for test storage... 
00:29:00.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:00.949 01:31:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:01.209 01:31:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:01.209 01:31:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:01.209 01:31:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:01.209 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:01.209 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:01.209 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.209 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:01.209 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:01.209 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:01.209 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.209 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.209 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=82127 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 82127 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 82127 ']' 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:29:01.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:01.210 01:31:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:01.210 [2024-05-15 01:31:36.729766] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:29:01.210 [2024-05-15 01:31:36.729821] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82127 ] 00:29:01.210 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.210 [2024-05-15 01:31:36.800393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:01.210 [2024-05-15 01:31:36.870171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.210 [2024-05-15 01:31:36.870175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.148 01:31:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:02.148 01:31:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:29:02.148 01:31:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:02.148 01:31:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:02.148 01:31:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:02.148 01:31:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:02.148 01:31:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:02.148 01:31:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:02.148 01:31:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:02.148 01:31:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:02.148 01:31:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:02.148 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:02.148 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:02.148 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:02.148 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:02.148 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:02.148 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:02.148 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:02.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:02.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:02.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:02.148 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:02.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:02.148 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:02.148 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:02.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:02.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:02.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:02.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:02.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:02.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:02.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:02.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:02.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:02.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:02.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:02.148 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:02.148 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:02.148 ' 00:29:04.679 [2024-05-15 01:31:39.932705] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.614 [2024-05-15 01:31:41.108241] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:05.614 [2024-05-15 01:31:41.108671] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:08.149 [2024-05-15 01:31:43.271399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:09.528 [2024-05-15 01:31:45.129172] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:10.907 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:10.907 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:10.907 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:10.907 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:10.907 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:10.907 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:10.907 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:10.907 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:10.907 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:10.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:10.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:10.907 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:10.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:10.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:10.907 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:10.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:10.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:10.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:10.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:10.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:10.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:10.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:10.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:10.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:10.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:10.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:10.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:10.907 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:11.166 01:31:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:11.166 01:31:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:11.166 01:31:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:11.166 01:31:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:11.166 01:31:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:11.166 01:31:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:11.166 01:31:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:11.166 01:31:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:29:11.425 01:31:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:11.425 01:31:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:11.425 01:31:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:11.425 01:31:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:11.425 01:31:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:11.685 01:31:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:11.685 01:31:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:11.685 01:31:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:11.685 01:31:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:11.685 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:11.685 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:11.685 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:11.685 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:11.685 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:11.685 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:11.685 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:11.685 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:11.685 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:11.685 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:11.685 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:11.685 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:11.685 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:11.685 ' 00:29:17.040 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:17.040 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:17.040 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:17.040 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:17.040 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:17.040 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:17.040 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:17.040 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:17.040 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:17.040 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:17.040 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:17.040 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:17.040 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:17.040 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 82127 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 82127 ']' 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 82127 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82127 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82127' 00:29:17.040 killing process with pid 82127 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 82127 00:29:17.040 [2024-05-15 01:31:52.271205] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 82127 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 82127 ']' 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 82127 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 82127 ']' 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 82127 00:29:17.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (82127) - No such process 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 82127 is not found' 00:29:17.040 Process with pid 82127 is not found 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:17.040 00:29:17.040 real 0m15.940s 00:29:17.040 user 0m32.893s 00:29:17.040 sys 0m0.891s 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:17.040 01:31:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:17.040 
************************************ 00:29:17.040 END TEST spdkcli_nvmf_tcp 00:29:17.040 ************************************ 00:29:17.040 01:31:52 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:17.040 01:31:52 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:29:17.040 01:31:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:17.040 01:31:52 -- common/autotest_common.sh@10 -- # set +x 00:29:17.040 ************************************ 00:29:17.040 START TEST nvmf_identify_passthru 00:29:17.040 ************************************ 00:29:17.040 01:31:52 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:17.040 * Looking for test storage... 00:29:17.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:17.040 01:31:52 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:17.040 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:17.040 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:17.040 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:17.040 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:17.040 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:17.040 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:17.040 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:17.040 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:17.040 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:17.040 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:17.040 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:17.040 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:17.041 01:31:52 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.041 01:31:52 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.041 01:31:52 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:17.041 01:31:52 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.041 01:31:52 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.041 01:31:52 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.041 01:31:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:17.041 01:31:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:17.041 01:31:52 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:17.041 01:31:52 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.041 01:31:52 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.041 01:31:52 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:17.041 01:31:52 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.041 01:31:52 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.041 01:31:52 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.041 01:31:52 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:17.041 01:31:52 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.041 01:31:52 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.041 01:31:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:17.041 01:31:52 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:17.041 01:31:52 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:17.041 01:31:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:23.615 01:31:59 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:23.615 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:23.615 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:23.615 Found net devices under 0000:af:00.0: cvl_0_0 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:23.615 Found net devices under 0000:af:00.1: cvl_0_1 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:23.615 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:23.616 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:23.616 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:23.616 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:23.616 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:23.616 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.616 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:23.616 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:23.616 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:23.616 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:23.875 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:23.875 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:23.875 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:23.875 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:23.875 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:23.875 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:23.875 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:23.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:29:23.875 00:29:23.875 --- 10.0.0.2 ping statistics --- 00:29:23.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.875 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:29:23.875 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:24.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:29:24.134 00:29:24.134 --- 10.0.0.1 ping statistics --- 00:29:24.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.134 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:29:24.134 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.134 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:29:24.134 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:24.134 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.134 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:24.134 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:24.134 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.134 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:24.134 01:31:59 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:24.134 01:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:24.134 01:31:59 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:24.134 01:31:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:24.134 01:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:24.134 01:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:29:24.134 01:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:29:24.134 01:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:29:24.134 01:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:29:24.134 01:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:29:24.134 01:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:29:24.134 01:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:24.134 01:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:24.134 01:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:29:24.134 01:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:29:24.134 01:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:d8:00.0 00:29:24.134 01:31:59 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:d8:00.0 00:29:24.134 01:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:d8:00.0 00:29:24.134 01:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:d8:00.0 ']' 00:29:24.134 01:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:29:24.134 01:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:24.134 01:31:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:24.134 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.411 
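At this point identify_passthru.sh has picked the first local NVMe controller by asking scripts/gen_nvme.sh for the configured traddr values, and it reads the drive's serial number directly over PCIe with spdk_nvme_identify. A condensed sketch of that step is shown below; it assumes an SPDK checkout in $rootdir (as in the log) and simply takes the first reported BDF, so treat it as an outline of the flow rather than the exact helper functions.

    # Sketch only: resolve the first NVMe BDF and read its serial number over PCIe.
    bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n 1)
    serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Serial Number:' | awk '{print $3}')
    # The same value is fetched again later over NVMe/TCP and the two are compared.
    echo "bdf=$bdf serial=$serial"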
01:32:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLN916500W71P6AGN 00:29:29.411 01:32:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:29:29.411 01:32:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:29.411 01:32:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:29.411 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.605 01:32:09 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:29:33.605 01:32:09 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:33.605 01:32:09 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:33.605 01:32:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:33.605 01:32:09 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:33.605 01:32:09 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:33.605 01:32:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:33.605 01:32:09 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=89652 00:29:33.605 01:32:09 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:33.605 01:32:09 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 89652 00:29:33.605 01:32:09 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 89652 ']' 00:29:33.605 01:32:09 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.605 01:32:09 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:33.605 01:32:09 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.605 01:32:09 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:33.605 01:32:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:33.605 01:32:09 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:33.605 [2024-05-15 01:32:09.282244] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:29:33.605 [2024-05-15 01:32:09.282295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.864 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.865 [2024-05-15 01:32:09.355277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:33.865 [2024-05-15 01:32:09.428850] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.865 [2024-05-15 01:32:09.428891] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:33.865 [2024-05-15 01:32:09.428900] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.865 [2024-05-15 01:32:09.428907] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.865 [2024-05-15 01:32:09.428914] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:33.865 [2024-05-15 01:32:09.429012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.865 [2024-05-15 01:32:09.429127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:33.865 [2024-05-15 01:32:09.429210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:33.865 [2024-05-15 01:32:09.429212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.432 01:32:10 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:34.432 01:32:10 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:29:34.432 01:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:34.432 01:32:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.432 01:32:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:34.432 INFO: Log level set to 20 00:29:34.432 INFO: Requests: 00:29:34.432 { 00:29:34.432 "jsonrpc": "2.0", 00:29:34.432 "method": "nvmf_set_config", 00:29:34.432 "id": 1, 00:29:34.432 "params": { 00:29:34.432 "admin_cmd_passthru": { 00:29:34.432 "identify_ctrlr": true 00:29:34.432 } 00:29:34.432 } 00:29:34.432 } 00:29:34.432 00:29:34.432 INFO: response: 00:29:34.432 { 00:29:34.432 "jsonrpc": "2.0", 00:29:34.432 "id": 1, 00:29:34.432 "result": true 00:29:34.432 } 00:29:34.432 00:29:34.432 01:32:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.432 01:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:34.432 01:32:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.432 01:32:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:34.432 INFO: Setting log level to 20 00:29:34.432 INFO: Setting log level to 20 00:29:34.432 INFO: Log level set to 20 00:29:34.432 INFO: Log level set to 20 00:29:34.432 INFO: Requests: 00:29:34.432 { 00:29:34.432 "jsonrpc": "2.0", 00:29:34.432 "method": "framework_start_init", 00:29:34.432 "id": 1 00:29:34.432 } 00:29:34.432 00:29:34.432 INFO: Requests: 00:29:34.432 { 00:29:34.432 "jsonrpc": "2.0", 00:29:34.432 "method": "framework_start_init", 00:29:34.432 "id": 1 00:29:34.432 } 00:29:34.432 00:29:34.690 [2024-05-15 01:32:10.187687] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:34.690 INFO: response: 00:29:34.690 { 00:29:34.690 "jsonrpc": "2.0", 00:29:34.690 "id": 1, 00:29:34.690 "result": true 00:29:34.690 } 00:29:34.690 00:29:34.690 INFO: response: 00:29:34.690 { 00:29:34.690 "jsonrpc": "2.0", 00:29:34.690 "id": 1, 00:29:34.690 "result": true 00:29:34.690 } 00:29:34.690 00:29:34.690 01:32:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.690 01:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:34.690 01:32:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.690 01:32:10 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:29:34.690 INFO: Setting log level to 40 00:29:34.690 INFO: Setting log level to 40 00:29:34.690 INFO: Setting log level to 40 00:29:34.690 [2024-05-15 01:32:10.201058] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.690 01:32:10 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.690 01:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:34.690 01:32:10 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:34.690 01:32:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:34.690 01:32:10 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:29:34.690 01:32:10 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:34.690 01:32:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:38.057 Nvme0n1 00:29:38.057 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.057 01:32:13 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:38.057 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.057 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:38.057 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.057 01:32:13 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:38.057 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.057 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:38.057 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.057 01:32:13 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:38.057 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.057 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:38.057 [2024-05-15 01:32:13.124138] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:38.057 [2024-05-15 01:32:13.124404] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.057 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.057 01:32:13 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:38.057 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.057 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:38.057 [ 00:29:38.057 { 00:29:38.057 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:38.057 "subtype": "Discovery", 00:29:38.057 "listen_addresses": [], 00:29:38.057 "allow_any_host": true, 00:29:38.057 "hosts": [] 00:29:38.057 }, 00:29:38.057 { 00:29:38.057 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:38.057 "subtype": "NVMe", 00:29:38.057 "listen_addresses": [ 00:29:38.057 { 00:29:38.057 "trtype": "TCP", 
00:29:38.057 "adrfam": "IPv4", 00:29:38.057 "traddr": "10.0.0.2", 00:29:38.057 "trsvcid": "4420" 00:29:38.057 } 00:29:38.057 ], 00:29:38.057 "allow_any_host": true, 00:29:38.057 "hosts": [], 00:29:38.057 "serial_number": "SPDK00000000000001", 00:29:38.057 "model_number": "SPDK bdev Controller", 00:29:38.057 "max_namespaces": 1, 00:29:38.057 "min_cntlid": 1, 00:29:38.057 "max_cntlid": 65519, 00:29:38.057 "namespaces": [ 00:29:38.057 { 00:29:38.057 "nsid": 1, 00:29:38.057 "bdev_name": "Nvme0n1", 00:29:38.057 "name": "Nvme0n1", 00:29:38.057 "nguid": "B9CC66DADFDE4D19B4F11AC89B83C501", 00:29:38.057 "uuid": "b9cc66da-dfde-4d19-b4f1-1ac89b83c501" 00:29:38.057 } 00:29:38.057 ] 00:29:38.057 } 00:29:38.057 ] 00:29:38.057 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.057 01:32:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:38.057 01:32:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:38.057 01:32:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:38.057 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.057 01:32:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLN916500W71P6AGN 00:29:38.057 01:32:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:38.057 01:32:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:38.057 01:32:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:38.057 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.057 01:32:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:29:38.057 01:32:13 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLN916500W71P6AGN '!=' BTLN916500W71P6AGN ']' 00:29:38.057 01:32:13 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:29:38.057 01:32:13 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:38.057 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.057 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:38.057 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.057 01:32:13 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:38.057 01:32:13 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:38.057 01:32:13 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:38.057 01:32:13 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:38.057 01:32:13 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:38.057 01:32:13 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:38.057 01:32:13 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:38.057 01:32:13 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:38.057 rmmod nvme_tcp 00:29:38.057 rmmod nvme_fabrics 00:29:38.361 rmmod 
nvme_keyring 00:29:38.361 01:32:13 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:38.361 01:32:13 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:38.361 01:32:13 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:38.361 01:32:13 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 89652 ']' 00:29:38.361 01:32:13 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 89652 00:29:38.361 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 89652 ']' 00:29:38.361 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 89652 00:29:38.361 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:29:38.361 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:38.361 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89652 00:29:38.361 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:38.361 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:38.361 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89652' 00:29:38.361 killing process with pid 89652 00:29:38.361 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 89652 00:29:38.361 [2024-05-15 01:32:13.848315] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:38.361 01:32:13 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 89652 00:29:40.269 01:32:15 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:40.269 01:32:15 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:40.269 01:32:15 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:40.269 01:32:15 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:40.269 01:32:15 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:40.269 01:32:15 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.269 01:32:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:40.269 01:32:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.806 01:32:17 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:42.806 00:29:42.806 real 0m25.334s 00:29:42.806 user 0m33.963s 00:29:42.806 sys 0m6.652s 00:29:42.806 01:32:17 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:42.806 01:32:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:42.806 ************************************ 00:29:42.806 END TEST nvmf_identify_passthru 00:29:42.806 ************************************ 00:29:42.806 01:32:17 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:42.806 01:32:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:42.806 01:32:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:42.806 01:32:17 -- common/autotest_common.sh@10 -- # set +x 00:29:42.806 ************************************ 00:29:42.806 START TEST nvmf_dif 00:29:42.806 
************************************ 00:29:42.806 01:32:17 nvmf_dif -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:42.806 * Looking for test storage... 00:29:42.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:42.806 01:32:18 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:42.806 01:32:18 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:42.806 01:32:18 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:42.806 01:32:18 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:42.806 01:32:18 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:42.806 01:32:18 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:42.806 01:32:18 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:42.806 01:32:18 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:42.806 01:32:18 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:42.806 01:32:18 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:42.806 01:32:18 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:42.806 01:32:18 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:42.806 01:32:18 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:42.806 01:32:18 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:42.806 01:32:18 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:42.806 01:32:18 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:42.806 01:32:18 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:42.806 01:32:18 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:42.806 01:32:18 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:42.806 01:32:18 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:42.806 01:32:18 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:42.806 01:32:18 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:42.806 01:32:18 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.807 01:32:18 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.807 01:32:18 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.807 01:32:18 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:29:42.807 01:32:18 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.807 01:32:18 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:42.807 01:32:18 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:42.807 01:32:18 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:42.807 01:32:18 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:42.807 01:32:18 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:42.807 01:32:18 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:42.807 01:32:18 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:42.807 01:32:18 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:42.807 01:32:18 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:42.807 01:32:18 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:42.807 01:32:18 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:42.807 01:32:18 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:42.807 01:32:18 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:42.807 01:32:18 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:42.807 01:32:18 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:42.807 01:32:18 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.807 01:32:18 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:42.807 01:32:18 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:42.807 01:32:18 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:42.807 01:32:18 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.807 01:32:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:42.807 01:32:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.807 01:32:18 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:42.807 01:32:18 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:42.807 01:32:18 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:29:42.807 01:32:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:49.381 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:49.381 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:49.381 01:32:24 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:49.381 Found net devices under 0000:af:00.0: cvl_0_0 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:49.381 01:32:24 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:49.381 Found net devices under 0000:af:00.1: cvl_0_1 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:49.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:49.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:29:49.382 00:29:49.382 --- 10.0.0.2 ping statistics --- 00:29:49.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.382 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:49.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:49.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:29:49.382 00:29:49.382 --- 10.0.0.1 ping statistics --- 00:29:49.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.382 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:49.382 01:32:24 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:51.920 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:29:51.920 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:29:51.920 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:29:51.920 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:29:51.920 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:29:51.920 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:29:51.920 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:29:51.920 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:29:51.920 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:29:51.920 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:29:51.920 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:29:51.920 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:29:51.920 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:29:51.920 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:29:51.920 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:29:51.920 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:29:51.920 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:51.920 01:32:27 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.920 01:32:27 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:51.920 01:32:27 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:51.920 01:32:27 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.920 01:32:27 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:51.920 01:32:27 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:51.920 01:32:27 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:51.920 01:32:27 nvmf_dif -- 
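The block above rebuilds the same two-interface TCP fabric used in the previous test: the target-side port cvl_0_0 is moved into a private network namespace and addressed 10.0.0.2/24, the initiator keeps cvl_0_1 at 10.0.0.1/24 in the default namespace, port 4420 is opened in iptables, and both directions are ping-verified before scripts/setup.sh confirms the devices are still bound to vfio-pci. A bare-bones restatement of that topology, using the interface and namespace names from this run, is:

    # Sketch only: the namespace-based target/initiator split used by these tests.
    ip netns add cvl_0_0_ns_spdk                     # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1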
target/dif.sh@137 -- # nvmfappstart 00:29:51.920 01:32:27 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:51.920 01:32:27 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:29:51.920 01:32:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:51.920 01:32:27 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:51.920 01:32:27 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=95461 00:29:51.920 01:32:27 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 95461 00:29:51.920 01:32:27 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 95461 ']' 00:29:51.920 01:32:27 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.920 01:32:27 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:51.920 01:32:27 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.920 01:32:27 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:51.920 01:32:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:51.920 [2024-05-15 01:32:27.383557] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:29:51.920 [2024-05-15 01:32:27.383606] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.920 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.920 [2024-05-15 01:32:27.457485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.920 [2024-05-15 01:32:27.529946] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.920 [2024-05-15 01:32:27.529988] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.920 [2024-05-15 01:32:27.529998] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.920 [2024-05-15 01:32:27.530006] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.920 [2024-05-15 01:32:27.530013] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
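For the dif tests the harness appends --dif-insert-or-strip to the transport options and starts nvmf_tgt inside the target namespace, then blocks until the application answers on its /var/tmp/spdk.sock RPC socket before configuring it (the transport itself is created a few lines further down). A rough equivalent using plain rpc.py calls might look like the following; the polling loop stands in for the harness's waitforlisten helper and is an assumption, not the test's own code.

    # Sketch only: start the target in the namespace and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    # Poll the default RPC socket until the app is ready (waitforlisten does this in the log).
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # Same transport call the test issues next, with DIF insert/strip enabled.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip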
00:29:51.920 [2024-05-15 01:32:27.530041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.489 01:32:28 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:52.489 01:32:28 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:29:52.489 01:32:28 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:52.489 01:32:28 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:52.489 01:32:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:52.749 01:32:28 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.749 01:32:28 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:52.749 01:32:28 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:52.749 01:32:28 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.749 01:32:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:52.749 [2024-05-15 01:32:28.207370] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.749 01:32:28 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.749 01:32:28 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:52.749 01:32:28 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:29:52.749 01:32:28 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:52.749 01:32:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:52.749 ************************************ 00:29:52.749 START TEST fio_dif_1_default 00:29:52.749 ************************************ 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:52.749 bdev_null0 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:52.749 [2024-05-15 01:32:28.283515] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:52.749 [2024-05-15 01:32:28.283714] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:52.749 { 00:29:52.749 "params": { 00:29:52.749 "name": "Nvme$subsystem", 00:29:52.749 "trtype": "$TEST_TRANSPORT", 00:29:52.749 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.749 "adrfam": "ipv4", 00:29:52.749 "trsvcid": "$NVMF_PORT", 00:29:52.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.749 "hdgst": ${hdgst:-false}, 00:29:52.749 "ddgst": ${ddgst:-false} 00:29:52.749 }, 00:29:52.749 "method": "bdev_nvme_attach_controller" 00:29:52.749 } 00:29:52.749 EOF 00:29:52.749 )") 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:52.749 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:29:52.750 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:29:52.750 01:32:28 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:52.750 01:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:52.750 01:32:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:52.750 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:52.750 01:32:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:29:52.750 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:29:52.750 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:52.750 01:32:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:52.750 01:32:28 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:52.750 "params": { 00:29:52.750 "name": "Nvme0", 00:29:52.750 "trtype": "tcp", 00:29:52.750 "traddr": "10.0.0.2", 00:29:52.750 "adrfam": "ipv4", 00:29:52.750 "trsvcid": "4420", 00:29:52.750 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:52.750 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:52.750 "hdgst": false, 00:29:52.750 "ddgst": false 00:29:52.750 }, 00:29:52.750 "method": "bdev_nvme_attach_controller" 00:29:52.750 }' 00:29:52.750 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:52.750 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:52.750 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:29:52.750 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:52.750 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:29:52.750 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:29:52.750 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:29:52.750 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:29:52.750 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:52.750 01:32:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:53.009 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:53.009 fio-3.35 00:29:53.009 Starting 1 thread 00:29:53.009 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.222 00:30:05.222 filename0: (groupid=0, jobs=1): err= 0: pid=95900: Wed May 15 01:32:39 2024 00:30:05.222 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10001msec) 00:30:05.222 slat (nsec): min=5678, max=25586, avg=5976.38, stdev=1149.37 00:30:05.222 clat (usec): min=41835, max=43242, avg=42002.18, stdev=139.64 00:30:05.222 lat (usec): min=41841, max=43268, avg=42008.16, stdev=139.91 00:30:05.222 clat percentiles (usec): 00:30:05.222 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:30:05.222 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:30:05.222 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:05.222 | 99.00th=[42730], 99.50th=[43254], 
99.90th=[43254], 99.95th=[43254], 00:30:05.222 | 99.99th=[43254] 00:30:05.222 bw ( KiB/s): min= 352, max= 384, per=99.80%, avg=380.63, stdev=10.09, samples=19 00:30:05.222 iops : min= 88, max= 96, avg=95.16, stdev= 2.52, samples=19 00:30:05.222 lat (msec) : 50=100.00% 00:30:05.222 cpu : usr=85.27%, sys=14.48%, ctx=11, majf=0, minf=199 00:30:05.222 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:05.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:05.222 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:05.222 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:05.222 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:05.222 00:30:05.222 Run status group 0 (all jobs): 00:30:05.222 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3808KiB (3899kB), run=10001-10001msec 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.222 00:30:05.222 real 0m11.189s 00:30:05.222 user 0m17.657s 00:30:05.222 sys 0m1.784s 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:05.222 ************************************ 00:30:05.222 END TEST fio_dif_1_default 00:30:05.222 ************************************ 00:30:05.222 01:32:39 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:05.222 01:32:39 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:05.222 01:32:39 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:05.222 01:32:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:05.222 ************************************ 00:30:05.222 START TEST fio_dif_1_multi_subsystems 00:30:05.222 ************************************ 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- 
# local sub 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:05.222 bdev_null0 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.222 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:05.222 [2024-05-15 01:32:39.557102] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:05.223 bdev_null1 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.223 01:32:39 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.223 { 00:30:05.223 "params": { 00:30:05.223 "name": "Nvme$subsystem", 00:30:05.223 "trtype": "$TEST_TRANSPORT", 00:30:05.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.223 "adrfam": "ipv4", 00:30:05.223 "trsvcid": "$NVMF_PORT", 00:30:05.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.223 "hdgst": ${hdgst:-false}, 00:30:05.223 "ddgst": ${ddgst:-false} 00:30:05.223 }, 00:30:05.223 "method": "bdev_nvme_attach_controller" 00:30:05.223 } 00:30:05.223 EOF 00:30:05.223 )") 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1335 -- # local sanitizers 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.223 { 00:30:05.223 "params": { 00:30:05.223 "name": "Nvme$subsystem", 00:30:05.223 "trtype": "$TEST_TRANSPORT", 00:30:05.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.223 "adrfam": "ipv4", 00:30:05.223 "trsvcid": "$NVMF_PORT", 00:30:05.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.223 "hdgst": ${hdgst:-false}, 00:30:05.223 "ddgst": ${ddgst:-false} 00:30:05.223 }, 00:30:05.223 "method": "bdev_nvme_attach_controller" 00:30:05.223 } 00:30:05.223 EOF 00:30:05.223 )") 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
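The trace above is the test wiring fio to SPDK's bdev fio plugin: one bdev_nvme_attach_controller stanza is generated per target subsystem, the plugin library is resolved for LD_PRELOAD, and both the JSON config and the generated job file are handed to fio through /dev/fd descriptors (the combined two-controller JSON is printed next in the log). A minimal standalone sketch of the same invocation pattern, assuming placeholder paths and SPDK's usual "subsystems"/"bdev"/"config" wrapper; only the inner attach-controller stanzas appear verbatim in this log:

# Sketch only: a real run builds this JSON on the fly and passes it via process
# substitution; the /tmp paths here are placeholders.
cat > /tmp/bdev_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Preload the SPDK fio plugin so fio can resolve the spdk_bdev ioengine, then point it
# at the JSON config and a job file (the CI run uses /dev/fd/62 and /dev/fd/61 instead).
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev_nvme.json /tmp/dif.fio

The two-subsystem case below simply adds a second stanza (Nvme1, cnode1) to the same config, which is why the JSON printed next contains two attach-controller entries.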
00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:05.223 "params": { 00:30:05.223 "name": "Nvme0", 00:30:05.223 "trtype": "tcp", 00:30:05.223 "traddr": "10.0.0.2", 00:30:05.223 "adrfam": "ipv4", 00:30:05.223 "trsvcid": "4420", 00:30:05.223 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:05.223 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:05.223 "hdgst": false, 00:30:05.223 "ddgst": false 00:30:05.223 }, 00:30:05.223 "method": "bdev_nvme_attach_controller" 00:30:05.223 },{ 00:30:05.223 "params": { 00:30:05.223 "name": "Nvme1", 00:30:05.223 "trtype": "tcp", 00:30:05.223 "traddr": "10.0.0.2", 00:30:05.223 "adrfam": "ipv4", 00:30:05.223 "trsvcid": "4420", 00:30:05.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:05.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:05.223 "hdgst": false, 00:30:05.223 "ddgst": false 00:30:05.223 }, 00:30:05.223 "method": "bdev_nvme_attach_controller" 00:30:05.223 }' 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:05.223 01:32:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:05.223 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:05.223 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:05.223 fio-3.35 00:30:05.223 Starting 2 threads 00:30:05.223 EAL: No free 2048 kB hugepages reported on node 1 00:30:15.206 00:30:15.206 filename0: (groupid=0, jobs=1): err= 0: pid=97982: Wed May 15 01:32:50 2024 00:30:15.206 read: IOPS=184, BW=736KiB/s (754kB/s)(7392KiB/10040msec) 00:30:15.206 slat (nsec): min=5798, max=80771, avg=6961.08, stdev=2617.53 00:30:15.206 clat (usec): min=725, max=43791, avg=21711.37, stdev=20234.22 00:30:15.206 lat (usec): min=731, max=43813, avg=21718.33, stdev=20233.65 00:30:15.206 clat percentiles (usec): 00:30:15.206 | 1.00th=[ 1221], 5.00th=[ 1369], 10.00th=[ 1369], 20.00th=[ 1369], 00:30:15.206 | 30.00th=[ 1385], 40.00th=[ 1401], 50.00th=[41157], 60.00th=[41681], 00:30:15.206 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42730], 00:30:15.206 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:30:15.206 | 99.99th=[43779] 
00:30:15.206 bw ( KiB/s): min= 672, max= 768, per=50.10%, avg=737.60, stdev=33.60, samples=20 00:30:15.206 iops : min= 168, max= 192, avg=184.40, stdev= 8.40, samples=20 00:30:15.206 lat (usec) : 750=0.22%, 1000=0.16% 00:30:15.206 lat (msec) : 2=49.40%, 50=50.22% 00:30:15.206 cpu : usr=93.20%, sys=6.55%, ctx=9, majf=0, minf=189 00:30:15.206 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:15.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.206 issued rwts: total=1848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:15.206 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:15.206 filename1: (groupid=0, jobs=1): err= 0: pid=97983: Wed May 15 01:32:50 2024 00:30:15.206 read: IOPS=184, BW=737KiB/s (755kB/s)(7376KiB/10004msec) 00:30:15.206 slat (nsec): min=5796, max=29742, avg=6957.12, stdev=2068.89 00:30:15.206 clat (usec): min=1349, max=43873, avg=21680.30, stdev=20261.88 00:30:15.206 lat (usec): min=1355, max=43895, avg=21687.26, stdev=20261.23 00:30:15.206 clat percentiles (usec): 00:30:15.206 | 1.00th=[ 1352], 5.00th=[ 1352], 10.00th=[ 1369], 20.00th=[ 1369], 00:30:15.206 | 30.00th=[ 1369], 40.00th=[ 1385], 50.00th=[41157], 60.00th=[41681], 00:30:15.206 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42730], 95.00th=[42730], 00:30:15.206 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:30:15.206 | 99.99th=[43779] 00:30:15.206 bw ( KiB/s): min= 704, max= 768, per=50.10%, avg=737.68, stdev=32.83, samples=19 00:30:15.206 iops : min= 176, max= 192, avg=184.42, stdev= 8.21, samples=19 00:30:15.206 lat (msec) : 2=49.89%, 50=50.11% 00:30:15.206 cpu : usr=93.43%, sys=6.32%, ctx=21, majf=0, minf=61 00:30:15.206 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:15.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:15.206 issued rwts: total=1844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:15.206 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:15.206 00:30:15.206 Run status group 0 (all jobs): 00:30:15.206 READ: bw=1471KiB/s (1506kB/s), 736KiB/s-737KiB/s (754kB/s-755kB/s), io=14.4MiB (15.1MB), run=10004-10040msec 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 
-- # xtrace_disable 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.467 00:30:15.467 real 0m11.471s 00:30:15.467 user 0m27.897s 00:30:15.467 sys 0m1.672s 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:15.467 01:32:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:15.467 ************************************ 00:30:15.467 END TEST fio_dif_1_multi_subsystems 00:30:15.467 ************************************ 00:30:15.467 01:32:51 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:15.467 01:32:51 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:15.467 01:32:51 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:15.467 01:32:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:15.467 ************************************ 00:30:15.467 START TEST fio_dif_rand_params 00:30:15.467 ************************************ 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:15.467 01:32:51 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.467 bdev_null0 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:15.467 [2024-05-15 01:32:51.107563] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:15.467 01:32:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:15.468 { 00:30:15.468 "params": { 00:30:15.468 "name": "Nvme$subsystem", 00:30:15.468 "trtype": "$TEST_TRANSPORT", 00:30:15.468 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:30:15.468 "adrfam": "ipv4", 00:30:15.468 "trsvcid": "$NVMF_PORT", 00:30:15.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.468 "hdgst": ${hdgst:-false}, 00:30:15.468 "ddgst": ${ddgst:-false} 00:30:15.468 }, 00:30:15.468 "method": "bdev_nvme_attach_controller" 00:30:15.468 } 00:30:15.468 EOF 00:30:15.468 )") 00:30:15.468 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:15.468 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:15.468 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:15.468 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:15.468 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:30:15.468 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:15.468 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:15.468 01:32:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:15.468 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:15.468 01:32:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:15.468 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:15.468 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:30:15.468 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:15.468 01:32:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
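gen_fio_conf writes the fio job file to another /dev/fd descriptor, so its contents never appear in this trace; only the resolved parameters show up in the fio banner that follows (randread, 128 KiB blocks, 3 jobs, iodepth 3, roughly 5 seconds of runtime). A hedged reconstruction of what that job file plausibly looks like for this case; the option names are standard fio options, but the exact layout is an assumption, and Nvme0n1 is the namespace bdev created by the Nvme0 attach-controller stanza:

# Plausible job file for the NULL_DIF=3 / bs=128k / numjobs=3 / iodepth=3 case traced
# above; the real file is generated on the fly and is not visible in this log.
cat > /tmp/dif_rand.fio <<'EOF'
[global]
thread=1
time_based=1
runtime=5
bs=128k
numjobs=3
iodepth=3
rw=randread

[filename0]
filename=Nvme0n1
EOF

With the spdk_bdev ioengine, filename names an SPDK bdev rather than a block device node, which is how the three threads end up on the DIF-enabled null bdev exported over NVMe/TCP.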
00:30:15.468 01:32:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:15.468 01:32:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:15.468 "params": { 00:30:15.468 "name": "Nvme0", 00:30:15.468 "trtype": "tcp", 00:30:15.468 "traddr": "10.0.0.2", 00:30:15.468 "adrfam": "ipv4", 00:30:15.468 "trsvcid": "4420", 00:30:15.468 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:15.468 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:15.468 "hdgst": false, 00:30:15.468 "ddgst": false 00:30:15.468 }, 00:30:15.468 "method": "bdev_nvme_attach_controller" 00:30:15.468 }' 00:30:15.762 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:15.762 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:15.762 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:15.762 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:15.762 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:15.762 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:15.762 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:15.762 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:15.762 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:15.762 01:32:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:16.023 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:16.023 ... 
00:30:16.023 fio-3.35 00:30:16.023 Starting 3 threads 00:30:16.023 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.593 00:30:22.593 filename0: (groupid=0, jobs=1): err= 0: pid=99934: Wed May 15 01:32:56 2024 00:30:22.593 read: IOPS=317, BW=39.6MiB/s (41.6MB/s)(198MiB/5003msec) 00:30:22.593 slat (nsec): min=2898, max=15582, avg=8519.32, stdev=2294.67 00:30:22.593 clat (usec): min=4261, max=56566, avg=9450.53, stdev=9982.02 00:30:22.593 lat (usec): min=4268, max=56576, avg=9459.05, stdev=9982.14 00:30:22.593 clat percentiles (usec): 00:30:22.593 | 1.00th=[ 4490], 5.00th=[ 4686], 10.00th=[ 4883], 20.00th=[ 5538], 00:30:22.593 | 30.00th=[ 6259], 40.00th=[ 6783], 50.00th=[ 7046], 60.00th=[ 7439], 00:30:22.593 | 70.00th=[ 7963], 80.00th=[ 8586], 90.00th=[ 9372], 95.00th=[47973], 00:30:22.593 | 99.00th=[49546], 99.50th=[50070], 99.90th=[56361], 99.95th=[56361], 00:30:22.593 | 99.99th=[56361] 00:30:22.593 bw ( KiB/s): min=23552, max=54016, per=42.33%, avg=40550.40, stdev=9062.38, samples=10 00:30:22.593 iops : min= 184, max= 422, avg=316.80, stdev=70.80, samples=10 00:30:22.593 lat (msec) : 10=92.06%, 20=2.08%, 50=5.42%, 100=0.44% 00:30:22.593 cpu : usr=91.26%, sys=8.26%, ctx=8, majf=0, minf=95 00:30:22.593 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:22.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.593 issued rwts: total=1586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.593 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:22.593 filename0: (groupid=0, jobs=1): err= 0: pid=99935: Wed May 15 01:32:56 2024 00:30:22.593 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(142MiB/5002msec) 00:30:22.593 slat (nsec): min=3988, max=16807, avg=8635.94, stdev=2438.74 00:30:22.593 clat (usec): min=4197, max=93523, avg=13195.97, stdev=14583.00 00:30:22.593 lat (usec): min=4204, max=93530, avg=13204.60, stdev=14583.25 00:30:22.593 clat percentiles (usec): 00:30:22.593 | 1.00th=[ 4424], 5.00th=[ 4752], 10.00th=[ 5145], 20.00th=[ 5800], 00:30:22.593 | 30.00th=[ 6849], 40.00th=[ 7308], 50.00th=[ 7832], 60.00th=[ 8717], 00:30:22.593 | 70.00th=[ 9765], 80.00th=[10683], 90.00th=[48497], 95.00th=[50070], 00:30:22.593 | 99.00th=[52167], 99.50th=[53740], 99.90th=[93848], 99.95th=[93848], 00:30:22.593 | 99.99th=[93848] 00:30:22.593 bw ( KiB/s): min=15360, max=41216, per=30.28%, avg=29010.40, stdev=8241.94, samples=10 00:30:22.593 iops : min= 120, max= 322, avg=226.60, stdev=64.39, samples=10 00:30:22.593 lat (msec) : 10=72.71%, 20=14.35%, 50=8.10%, 100=4.84% 00:30:22.593 cpu : usr=91.42%, sys=7.98%, ctx=7, majf=0, minf=64 00:30:22.593 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:22.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.593 issued rwts: total=1136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.593 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:22.593 filename0: (groupid=0, jobs=1): err= 0: pid=99936: Wed May 15 01:32:56 2024 00:30:22.593 read: IOPS=207, BW=25.9MiB/s (27.1MB/s)(130MiB/5028msec) 00:30:22.593 slat (nsec): min=5973, max=25898, avg=9119.02, stdev=2493.97 00:30:22.593 clat (usec): min=4321, max=91411, avg=14474.74, stdev=15148.25 00:30:22.593 lat (usec): min=4328, max=91420, avg=14483.86, stdev=15148.33 00:30:22.593 clat percentiles (usec): 00:30:22.593 | 
1.00th=[ 4948], 5.00th=[ 5538], 10.00th=[ 6259], 20.00th=[ 6980], 00:30:22.593 | 30.00th=[ 7308], 40.00th=[ 7767], 50.00th=[ 8356], 60.00th=[ 9372], 00:30:22.593 | 70.00th=[10290], 80.00th=[11207], 90.00th=[48497], 95.00th=[51119], 00:30:22.593 | 99.00th=[52167], 99.50th=[53216], 99.90th=[89654], 99.95th=[91751], 00:30:22.593 | 99.99th=[91751] 00:30:22.593 bw ( KiB/s): min=19200, max=38400, per=27.74%, avg=26572.80, stdev=5838.70, samples=10 00:30:22.593 iops : min= 150, max= 300, avg=207.60, stdev=45.61, samples=10 00:30:22.593 lat (msec) : 10=66.86%, 20=18.44%, 50=8.26%, 100=6.44% 00:30:22.593 cpu : usr=91.53%, sys=7.90%, ctx=9, majf=0, minf=164 00:30:22.593 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:22.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.593 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.593 issued rwts: total=1041,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.593 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:22.593 00:30:22.593 Run status group 0 (all jobs): 00:30:22.593 READ: bw=93.6MiB/s (98.1MB/s), 25.9MiB/s-39.6MiB/s (27.1MB/s-41.6MB/s), io=470MiB (493MB), run=5002-5028msec 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:22.593 01:32:57 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:22.593 bdev_null0 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:22.593 [2024-05-15 01:32:57.229003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:22.593 bdev_null1 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
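Each create_subsystem call in this trace expands to the same four RPCs, issued through the rpc_cmd wrapper: create a null bdev with 16-byte metadata and the requested DIF type, create the NVMe-oF subsystem, add the bdev as a namespace, and add the TCP listener (the listener registration for cnode1 follows right after this point in the log). The equivalent direct scripts/rpc.py calls for subsystem 1 in this run, with arguments copied from the trace; the rpc.py path and target socket are whatever the SPDK target was started with:

# 64 MiB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 2, exported as a
# namespace of cnode1 and reachable over NVMe/TCP at 10.0.0.2:4420.
scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420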
00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:22.593 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:22.594 bdev_null2 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 
00:30:22.594 { 00:30:22.594 "params": { 00:30:22.594 "name": "Nvme$subsystem", 00:30:22.594 "trtype": "$TEST_TRANSPORT", 00:30:22.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:22.594 "adrfam": "ipv4", 00:30:22.594 "trsvcid": "$NVMF_PORT", 00:30:22.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:22.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:22.594 "hdgst": ${hdgst:-false}, 00:30:22.594 "ddgst": ${ddgst:-false} 00:30:22.594 }, 00:30:22.594 "method": "bdev_nvme_attach_controller" 00:30:22.594 } 00:30:22.594 EOF 00:30:22.594 )") 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:22.594 { 00:30:22.594 "params": { 00:30:22.594 "name": "Nvme$subsystem", 00:30:22.594 "trtype": "$TEST_TRANSPORT", 00:30:22.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:22.594 "adrfam": "ipv4", 00:30:22.594 "trsvcid": "$NVMF_PORT", 00:30:22.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:22.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:22.594 "hdgst": ${hdgst:-false}, 00:30:22.594 "ddgst": ${ddgst:-false} 00:30:22.594 }, 00:30:22.594 "method": "bdev_nvme_attach_controller" 00:30:22.594 } 00:30:22.594 EOF 00:30:22.594 )") 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ 
)) 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:22.594 { 00:30:22.594 "params": { 00:30:22.594 "name": "Nvme$subsystem", 00:30:22.594 "trtype": "$TEST_TRANSPORT", 00:30:22.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:22.594 "adrfam": "ipv4", 00:30:22.594 "trsvcid": "$NVMF_PORT", 00:30:22.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:22.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:22.594 "hdgst": ${hdgst:-false}, 00:30:22.594 "ddgst": ${ddgst:-false} 00:30:22.594 }, 00:30:22.594 "method": "bdev_nvme_attach_controller" 00:30:22.594 } 00:30:22.594 EOF 00:30:22.594 )") 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:22.594 "params": { 00:30:22.594 "name": "Nvme0", 00:30:22.594 "trtype": "tcp", 00:30:22.594 "traddr": "10.0.0.2", 00:30:22.594 "adrfam": "ipv4", 00:30:22.594 "trsvcid": "4420", 00:30:22.594 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:22.594 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:22.594 "hdgst": false, 00:30:22.594 "ddgst": false 00:30:22.594 }, 00:30:22.594 "method": "bdev_nvme_attach_controller" 00:30:22.594 },{ 00:30:22.594 "params": { 00:30:22.594 "name": "Nvme1", 00:30:22.594 "trtype": "tcp", 00:30:22.594 "traddr": "10.0.0.2", 00:30:22.594 "adrfam": "ipv4", 00:30:22.594 "trsvcid": "4420", 00:30:22.594 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:22.594 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:22.594 "hdgst": false, 00:30:22.594 "ddgst": false 00:30:22.594 }, 00:30:22.594 "method": "bdev_nvme_attach_controller" 00:30:22.594 },{ 00:30:22.594 "params": { 00:30:22.594 "name": "Nvme2", 00:30:22.594 "trtype": "tcp", 00:30:22.594 "traddr": "10.0.0.2", 00:30:22.594 "adrfam": "ipv4", 00:30:22.594 "trsvcid": "4420", 00:30:22.594 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:22.594 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:22.594 "hdgst": false, 00:30:22.594 "ddgst": false 00:30:22.594 }, 00:30:22.594 "method": "bdev_nvme_attach_controller" 00:30:22.594 }' 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:22.594 01:32:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:22.594 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:22.594 ... 00:30:22.595 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:22.595 ... 00:30:22.595 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:22.595 ... 00:30:22.595 fio-3.35 00:30:22.595 Starting 24 threads 00:30:22.595 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.804 00:30:34.804 filename0: (groupid=0, jobs=1): err= 0: pid=101217: Wed May 15 01:33:08 2024 00:30:34.804 read: IOPS=659, BW=2637KiB/s (2700kB/s)(25.8MiB/10016msec) 00:30:34.804 slat (nsec): min=3861, max=88850, avg=17645.31, stdev=13211.88 00:30:34.804 clat (usec): min=2490, max=48255, avg=24158.49, stdev=5179.26 00:30:34.804 lat (usec): min=2500, max=48287, avg=24176.14, stdev=5181.71 00:30:34.804 clat percentiles (usec): 00:30:34.804 | 1.00th=[ 3163], 5.00th=[14484], 10.00th=[18482], 20.00th=[23725], 00:30:34.804 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:30:34.804 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[30278], 00:30:34.804 | 99.00th=[36963], 99.50th=[39584], 99.90th=[43779], 99.95th=[47973], 00:30:34.804 | 99.99th=[48497] 00:30:34.804 bw ( KiB/s): min= 2384, max= 3712, per=4.49%, avg=2638.40, stdev=283.14, samples=20 00:30:34.804 iops : min= 596, max= 928, avg=659.60, stdev=70.78, samples=20 00:30:34.804 lat (msec) : 4=1.94%, 10=1.27%, 20=8.85%, 50=87.94% 00:30:34.804 cpu : usr=97.43%, sys=2.13%, ctx=20, majf=0, minf=108 00:30:34.804 IO depths : 1=1.4%, 2=2.9%, 4=8.6%, 8=73.4%, 16=13.8%, 32=0.0%, >=64=0.0% 00:30:34.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.804 complete : 0=0.0%, 4=90.5%, 8=6.2%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.804 issued rwts: total=6602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.804 filename0: (groupid=0, jobs=1): err= 0: pid=101219: Wed May 15 01:33:08 2024 00:30:34.804 read: IOPS=638, BW=2554KiB/s (2615kB/s)(25.0MiB/10015msec) 00:30:34.804 slat (nsec): min=6399, max=73072, avg=15991.33, stdev=8125.66 00:30:34.804 clat (usec): min=2835, max=43225, avg=24957.57, stdev=3593.61 00:30:34.804 lat (usec): min=2843, max=43233, avg=24973.56, stdev=3594.56 00:30:34.804 clat percentiles (usec): 00:30:34.804 | 1.00th=[ 4686], 5.00th=[19792], 10.00th=[23725], 20.00th=[24511], 00:30:34.804 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:30:34.804 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[30278], 00:30:34.804 | 99.00th=[33424], 99.50th=[34866], 99.90th=[41681], 99.95th=[43254], 00:30:34.804 | 99.99th=[43254] 00:30:34.804 bw ( KiB/s): min= 2427, max= 3072, per=4.34%, avg=2550.95, stdev=136.24, samples=20 00:30:34.804 iops : min= 606, max= 768, avg=637.70, stdev=34.10, samples=20 00:30:34.804 lat (msec) : 4=0.50%, 10=1.31%, 
20=3.39%, 50=94.79% 00:30:34.805 cpu : usr=96.80%, sys=2.73%, ctx=18, majf=0, minf=54 00:30:34.805 IO depths : 1=2.3%, 2=4.6%, 4=13.3%, 8=69.5%, 16=10.2%, 32=0.0%, >=64=0.0% 00:30:34.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.805 complete : 0=0.0%, 4=90.8%, 8=3.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.805 issued rwts: total=6394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.805 filename0: (groupid=0, jobs=1): err= 0: pid=101220: Wed May 15 01:33:08 2024 00:30:34.805 read: IOPS=633, BW=2535KiB/s (2596kB/s)(24.8MiB/10019msec) 00:30:34.805 slat (nsec): min=6347, max=72076, avg=13958.42, stdev=8058.31 00:30:34.805 clat (usec): min=5748, max=68830, avg=25141.69, stdev=4822.32 00:30:34.805 lat (usec): min=5756, max=68855, avg=25155.65, stdev=4823.71 00:30:34.805 clat percentiles (usec): 00:30:34.805 | 1.00th=[13566], 5.00th=[16909], 10.00th=[19530], 20.00th=[23987], 00:30:34.805 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:30:34.805 | 70.00th=[25560], 80.00th=[26084], 90.00th=[29230], 95.00th=[33162], 00:30:34.805 | 99.00th=[39584], 99.50th=[43254], 99.90th=[68682], 99.95th=[68682], 00:30:34.805 | 99.99th=[68682] 00:30:34.805 bw ( KiB/s): min= 2176, max= 2746, per=4.32%, avg=2537.00, stdev=116.87, samples=20 00:30:34.805 iops : min= 544, max= 686, avg=634.20, stdev=29.16, samples=20 00:30:34.805 lat (msec) : 10=0.19%, 20=10.54%, 50=89.02%, 100=0.25% 00:30:34.805 cpu : usr=97.02%, sys=2.54%, ctx=27, majf=0, minf=68 00:30:34.805 IO depths : 1=2.0%, 2=4.1%, 4=12.0%, 8=70.7%, 16=11.1%, 32=0.0%, >=64=0.0% 00:30:34.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.805 complete : 0=0.0%, 4=90.7%, 8=4.3%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.805 issued rwts: total=6350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.805 filename0: (groupid=0, jobs=1): err= 0: pid=101221: Wed May 15 01:33:08 2024 00:30:34.805 read: IOPS=632, BW=2528KiB/s (2589kB/s)(24.7MiB/10009msec) 00:30:34.805 slat (nsec): min=6171, max=85614, avg=23847.24, stdev=14934.34 00:30:34.805 clat (usec): min=10380, max=42165, avg=25174.40, stdev=3801.44 00:30:34.805 lat (usec): min=10398, max=42188, avg=25198.25, stdev=3802.37 00:30:34.805 clat percentiles (usec): 00:30:34.805 | 1.00th=[14877], 5.00th=[17695], 10.00th=[21365], 20.00th=[24249], 00:30:34.805 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:30:34.805 | 70.00th=[25560], 80.00th=[26084], 90.00th=[28181], 95.00th=[32637], 00:30:34.805 | 99.00th=[38536], 99.50th=[40109], 99.90th=[41681], 99.95th=[42206], 00:30:34.805 | 99.99th=[42206] 00:30:34.805 bw ( KiB/s): min= 2432, max= 2816, per=4.30%, avg=2526.00, stdev=96.52, samples=19 00:30:34.805 iops : min= 608, max= 704, avg=631.47, stdev=24.13, samples=19 00:30:34.805 lat (msec) : 20=8.28%, 50=91.72% 00:30:34.805 cpu : usr=97.47%, sys=2.10%, ctx=22, majf=0, minf=71 00:30:34.805 IO depths : 1=1.3%, 2=2.7%, 4=9.9%, 8=73.6%, 16=12.5%, 32=0.0%, >=64=0.0% 00:30:34.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.805 complete : 0=0.0%, 4=90.4%, 8=5.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.805 issued rwts: total=6326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.805 filename0: (groupid=0, jobs=1): err= 0: pid=101222: Wed May 15 01:33:08 2024 
00:30:34.805 read: IOPS=645, BW=2582KiB/s (2644kB/s)(25.2MiB/10015msec) 00:30:34.805 slat (nsec): min=6365, max=86665, avg=17151.71, stdev=10180.34 00:30:34.805 clat (usec): min=2890, max=45788, avg=24670.97, stdev=3976.86 00:30:34.805 lat (usec): min=2898, max=45802, avg=24688.12, stdev=3978.00 00:30:34.805 clat percentiles (usec): 00:30:34.805 | 1.00th=[ 9372], 5.00th=[16909], 10.00th=[21365], 20.00th=[24249], 00:30:34.805 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:30:34.805 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[30278], 00:30:34.805 | 99.00th=[35390], 99.50th=[38536], 99.90th=[40633], 99.95th=[45876], 00:30:34.805 | 99.99th=[45876] 00:30:34.805 bw ( KiB/s): min= 2432, max= 3072, per=4.40%, avg=2582.90, stdev=162.85, samples=20 00:30:34.805 iops : min= 608, max= 768, avg=645.70, stdev=40.67, samples=20 00:30:34.805 lat (msec) : 4=0.65%, 10=0.51%, 20=7.21%, 50=91.63% 00:30:34.805 cpu : usr=97.10%, sys=2.44%, ctx=28, majf=0, minf=58 00:30:34.805 IO depths : 1=2.3%, 2=4.6%, 4=11.9%, 8=69.6%, 16=11.5%, 32=0.0%, >=64=0.0% 00:30:34.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.805 complete : 0=0.0%, 4=90.9%, 8=4.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.805 issued rwts: total=6464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.805 filename0: (groupid=0, jobs=1): err= 0: pid=101223: Wed May 15 01:33:08 2024 00:30:34.805 read: IOPS=625, BW=2502KiB/s (2562kB/s)(24.5MiB/10009msec) 00:30:34.805 slat (nsec): min=6357, max=77254, avg=16643.96, stdev=8996.27 00:30:34.805 clat (usec): min=10654, max=49136, avg=25470.05, stdev=3079.13 00:30:34.805 lat (usec): min=10664, max=49170, avg=25486.70, stdev=3079.96 00:30:34.805 clat percentiles (usec): 00:30:34.805 | 1.00th=[16909], 5.00th=[20317], 10.00th=[23725], 20.00th=[24511], 00:30:34.805 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:30:34.805 | 70.00th=[25822], 80.00th=[26084], 90.00th=[27657], 95.00th=[32113], 00:30:34.805 | 99.00th=[36963], 99.50th=[38536], 99.90th=[39584], 99.95th=[39584], 00:30:34.805 | 99.99th=[49021] 00:30:34.805 bw ( KiB/s): min= 2384, max= 2560, per=4.26%, avg=2501.16, stdev=61.00, samples=19 00:30:34.805 iops : min= 596, max= 640, avg=625.26, stdev=15.25, samples=19 00:30:34.805 lat (msec) : 20=4.49%, 50=95.51% 00:30:34.805 cpu : usr=97.05%, sys=2.49%, ctx=23, majf=0, minf=46 00:30:34.805 IO depths : 1=1.9%, 2=3.8%, 4=11.5%, 8=71.6%, 16=11.2%, 32=0.0%, >=64=0.0% 00:30:34.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.805 complete : 0=0.0%, 4=90.6%, 8=4.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.805 issued rwts: total=6261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.805 filename0: (groupid=0, jobs=1): err= 0: pid=101224: Wed May 15 01:33:08 2024 00:30:34.805 read: IOPS=593, BW=2373KiB/s (2430kB/s)(23.2MiB/10008msec) 00:30:34.805 slat (nsec): min=6290, max=79009, avg=18379.08, stdev=10831.60 00:30:34.805 clat (usec): min=8660, max=55985, avg=26852.16, stdev=5483.14 00:30:34.805 lat (usec): min=8673, max=56003, avg=26870.54, stdev=5483.56 00:30:34.805 clat percentiles (usec): 00:30:34.805 | 1.00th=[13829], 5.00th=[17957], 10.00th=[22414], 20.00th=[24511], 00:30:34.805 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084], 00:30:34.805 | 70.00th=[26870], 80.00th=[30802], 90.00th=[34866], 95.00th=[37487], 
00:30:34.805 | 99.00th=[42730], 99.50th=[44303], 99.90th=[48497], 99.95th=[55837], 00:30:34.805 | 99.99th=[55837] 00:30:34.805 bw ( KiB/s): min= 2171, max= 2608, per=4.04%, avg=2373.35, stdev=111.87, samples=20 00:30:34.805 iops : min= 542, max= 652, avg=593.30, stdev=28.04, samples=20 00:30:34.805 lat (msec) : 10=0.29%, 20=6.60%, 50=93.03%, 100=0.08% 00:30:34.805 cpu : usr=97.38%, sys=2.19%, ctx=21, majf=0, minf=55 00:30:34.805 IO depths : 1=1.0%, 2=2.2%, 4=10.2%, 8=73.7%, 16=12.9%, 32=0.0%, >=64=0.0% 00:30:34.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.805 complete : 0=0.0%, 4=90.5%, 8=5.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.805 issued rwts: total=5937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.805 filename0: (groupid=0, jobs=1): err= 0: pid=101225: Wed May 15 01:33:08 2024 00:30:34.805 read: IOPS=607, BW=2431KiB/s (2490kB/s)(23.8MiB/10004msec) 00:30:34.805 slat (usec): min=4, max=802, avg=23.46, stdev=18.20 00:30:34.805 clat (usec): min=7482, max=48272, avg=26087.57, stdev=3726.90 00:30:34.806 lat (usec): min=7513, max=48286, avg=26111.02, stdev=3724.12 00:30:34.806 clat percentiles (usec): 00:30:34.806 | 1.00th=[15008], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:30:34.806 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:30:34.806 | 70.00th=[25822], 80.00th=[26346], 90.00th=[32113], 95.00th=[34341], 00:30:34.806 | 99.00th=[36963], 99.50th=[39584], 99.90th=[41157], 99.95th=[47973], 00:30:34.806 | 99.99th=[48497] 00:30:34.806 bw ( KiB/s): min= 1920, max= 2688, per=4.10%, avg=2411.95, stdev=188.01, samples=19 00:30:34.806 iops : min= 480, max= 672, avg=602.95, stdev=47.06, samples=19 00:30:34.806 lat (msec) : 10=0.26%, 20=2.25%, 50=97.48% 00:30:34.806 cpu : usr=96.54%, sys=2.20%, ctx=44, majf=0, minf=52 00:30:34.806 IO depths : 1=5.3%, 2=10.7%, 4=22.2%, 8=54.4%, 16=7.4%, 32=0.0%, >=64=0.0% 00:30:34.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.806 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.806 issued rwts: total=6081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.806 filename1: (groupid=0, jobs=1): err= 0: pid=101226: Wed May 15 01:33:08 2024 00:30:34.806 read: IOPS=612, BW=2450KiB/s (2509kB/s)(23.9MiB/10003msec) 00:30:34.806 slat (nsec): min=6148, max=78399, avg=18709.55, stdev=12886.58 00:30:34.806 clat (usec): min=3509, max=48807, avg=26026.99, stdev=3447.82 00:30:34.806 lat (usec): min=3516, max=48814, avg=26045.70, stdev=3446.50 00:30:34.806 clat percentiles (usec): 00:30:34.806 | 1.00th=[15401], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:30:34.806 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:30:34.806 | 70.00th=[25822], 80.00th=[26346], 90.00th=[30540], 95.00th=[33162], 00:30:34.806 | 99.00th=[37487], 99.50th=[39584], 99.90th=[48497], 99.95th=[49021], 00:30:34.806 | 99.99th=[49021] 00:30:34.806 bw ( KiB/s): min= 1920, max= 2560, per=4.15%, avg=2436.11, stdev=166.00, samples=19 00:30:34.806 iops : min= 480, max= 640, avg=609.00, stdev=41.48, samples=19 00:30:34.806 lat (msec) : 4=0.10%, 10=0.26%, 20=1.24%, 50=98.40% 00:30:34.806 cpu : usr=97.38%, sys=2.15%, ctx=13, majf=0, minf=76 00:30:34.806 IO depths : 1=0.5%, 2=1.1%, 4=5.2%, 8=77.6%, 16=15.6%, 32=0.0%, >=64=0.0% 00:30:34.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:30:34.806 complete : 0=0.0%, 4=90.2%, 8=7.0%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.806 issued rwts: total=6128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.806 filename1: (groupid=0, jobs=1): err= 0: pid=101227: Wed May 15 01:33:08 2024 00:30:34.806 read: IOPS=573, BW=2293KiB/s (2348kB/s)(22.4MiB/10004msec) 00:30:34.806 slat (nsec): min=4709, max=81518, avg=17831.74, stdev=10565.53 00:30:34.806 clat (usec): min=3762, max=47201, avg=27808.50, stdev=5739.55 00:30:34.806 lat (usec): min=3769, max=47214, avg=27826.33, stdev=5739.28 00:30:34.806 clat percentiles (usec): 00:30:34.806 | 1.00th=[13042], 5.00th=[18482], 10.00th=[22938], 20.00th=[24511], 00:30:34.806 | 30.00th=[25035], 40.00th=[25560], 50.00th=[25822], 60.00th=[27657], 00:30:34.806 | 70.00th=[30802], 80.00th=[33162], 90.00th=[35390], 95.00th=[38011], 00:30:34.806 | 99.00th=[41157], 99.50th=[42206], 99.90th=[46400], 99.95th=[46924], 00:30:34.806 | 99.99th=[47449] 00:30:34.806 bw ( KiB/s): min= 1920, max= 2592, per=3.86%, avg=2269.84, stdev=187.99, samples=19 00:30:34.806 iops : min= 480, max= 648, avg=567.42, stdev=47.00, samples=19 00:30:34.806 lat (msec) : 4=0.17%, 10=0.33%, 20=5.79%, 50=93.70% 00:30:34.806 cpu : usr=97.41%, sys=2.15%, ctx=18, majf=0, minf=57 00:30:34.806 IO depths : 1=1.0%, 2=2.1%, 4=12.1%, 8=71.9%, 16=13.0%, 32=0.0%, >=64=0.0% 00:30:34.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.806 complete : 0=0.0%, 4=91.4%, 8=4.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.806 issued rwts: total=5734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.806 filename1: (groupid=0, jobs=1): err= 0: pid=101228: Wed May 15 01:33:08 2024 00:30:34.806 read: IOPS=628, BW=2513KiB/s (2574kB/s)(24.6MiB/10008msec) 00:30:34.806 slat (nsec): min=6332, max=78419, avg=19828.12, stdev=10323.23 00:30:34.806 clat (usec): min=11599, max=42388, avg=25317.41, stdev=2785.18 00:30:34.806 lat (usec): min=11612, max=42402, avg=25337.23, stdev=2785.62 00:30:34.806 clat percentiles (usec): 00:30:34.806 | 1.00th=[16712], 5.00th=[20579], 10.00th=[23987], 20.00th=[24511], 00:30:34.806 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25035], 60.00th=[25297], 00:30:34.806 | 70.00th=[25560], 80.00th=[25822], 90.00th=[26608], 95.00th=[31327], 00:30:34.806 | 99.00th=[34866], 99.50th=[36963], 99.90th=[39584], 99.95th=[42206], 00:30:34.806 | 99.99th=[42206] 00:30:34.806 bw ( KiB/s): min= 2304, max= 2784, per=4.27%, avg=2510.89, stdev=99.50, samples=19 00:30:34.806 iops : min= 576, max= 696, avg=627.68, stdev=24.91, samples=19 00:30:34.806 lat (msec) : 20=4.07%, 50=95.93% 00:30:34.806 cpu : usr=97.14%, sys=2.40%, ctx=19, majf=0, minf=46 00:30:34.806 IO depths : 1=3.9%, 2=7.8%, 4=17.8%, 8=61.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:30:34.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.806 complete : 0=0.0%, 4=92.2%, 8=2.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.806 issued rwts: total=6288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.806 filename1: (groupid=0, jobs=1): err= 0: pid=101229: Wed May 15 01:33:08 2024 00:30:34.806 read: IOPS=607, BW=2428KiB/s (2486kB/s)(23.7MiB/10006msec) 00:30:34.806 slat (nsec): min=6352, max=82578, avg=21091.89, stdev=11983.90 00:30:34.806 clat (usec): min=9384, max=54143, avg=26170.51, stdev=3798.01 00:30:34.806 lat (usec): 
min=9391, max=54159, avg=26191.60, stdev=3795.97 00:30:34.806 clat percentiles (usec): 00:30:34.806 | 1.00th=[16712], 5.00th=[23462], 10.00th=[23987], 20.00th=[24511], 00:30:34.806 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:30:34.806 | 70.00th=[25822], 80.00th=[26346], 90.00th=[31851], 95.00th=[34341], 00:30:34.806 | 99.00th=[38011], 99.50th=[40109], 99.90th=[54264], 99.95th=[54264], 00:30:34.806 | 99.99th=[54264] 00:30:34.806 bw ( KiB/s): min= 2048, max= 2560, per=4.10%, avg=2409.21, stdev=178.34, samples=19 00:30:34.806 iops : min= 512, max= 640, avg=602.26, stdev=44.57, samples=19 00:30:34.806 lat (msec) : 10=0.26%, 20=1.66%, 50=97.81%, 100=0.26% 00:30:34.806 cpu : usr=97.31%, sys=2.26%, ctx=13, majf=0, minf=38 00:30:34.806 IO depths : 1=5.4%, 2=10.7%, 4=22.4%, 8=54.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:30:34.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.806 complete : 0=0.0%, 4=93.5%, 8=1.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.806 issued rwts: total=6074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.806 filename1: (groupid=0, jobs=1): err= 0: pid=101230: Wed May 15 01:33:08 2024 00:30:34.806 read: IOPS=588, BW=2352KiB/s (2409kB/s)(23.0MiB/10003msec) 00:30:34.806 slat (nsec): min=6358, max=78973, avg=18087.86, stdev=10505.86 00:30:34.806 clat (usec): min=7737, max=83913, avg=27105.85, stdev=5841.60 00:30:34.806 lat (usec): min=7746, max=83939, avg=27123.94, stdev=5841.83 00:30:34.806 clat percentiles (usec): 00:30:34.806 | 1.00th=[14877], 5.00th=[18482], 10.00th=[22676], 20.00th=[24511], 00:30:34.806 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084], 00:30:34.806 | 70.00th=[27132], 80.00th=[31589], 90.00th=[34866], 95.00th=[36963], 00:30:34.806 | 99.00th=[43254], 99.50th=[44303], 99.90th=[68682], 99.95th=[83362], 00:30:34.806 | 99.99th=[84411] 00:30:34.806 bw ( KiB/s): min= 2096, max= 2536, per=4.00%, avg=2350.63, stdev=117.02, samples=19 00:30:34.806 iops : min= 524, max= 634, avg=587.58, stdev=29.34, samples=19 00:30:34.806 lat (msec) : 10=0.34%, 20=6.60%, 50=92.79%, 100=0.27% 00:30:34.806 cpu : usr=97.41%, sys=2.15%, ctx=16, majf=0, minf=57 00:30:34.806 IO depths : 1=1.0%, 2=2.0%, 4=9.5%, 8=74.2%, 16=13.3%, 32=0.0%, >=64=0.0% 00:30:34.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.806 complete : 0=0.0%, 4=90.4%, 8=5.7%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.806 issued rwts: total=5882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.806 filename1: (groupid=0, jobs=1): err= 0: pid=101231: Wed May 15 01:33:08 2024 00:30:34.806 read: IOPS=623, BW=2496KiB/s (2556kB/s)(24.4MiB/10001msec) 00:30:34.806 slat (nsec): min=6396, max=72526, avg=15664.49, stdev=9289.73 00:30:34.806 clat (usec): min=11653, max=56434, avg=25516.48, stdev=3396.28 00:30:34.806 lat (usec): min=11661, max=56469, avg=25532.14, stdev=3397.10 00:30:34.806 clat percentiles (usec): 00:30:34.806 | 1.00th=[15926], 5.00th=[20579], 10.00th=[23725], 20.00th=[24511], 00:30:34.806 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25297], 00:30:34.806 | 70.00th=[25822], 80.00th=[26084], 90.00th=[27132], 95.00th=[32637], 00:30:34.806 | 99.00th=[36439], 99.50th=[37487], 99.90th=[46400], 99.95th=[46400], 00:30:34.806 | 99.99th=[56361] 00:30:34.806 bw ( KiB/s): min= 2048, max= 2864, per=4.24%, avg=2492.32, stdev=155.22, samples=19 00:30:34.806 iops 
: min= 512, max= 716, avg=623.05, stdev=38.80, samples=19 00:30:34.806 lat (msec) : 20=4.58%, 50=95.37%, 100=0.05% 00:30:34.806 cpu : usr=97.15%, sys=2.40%, ctx=14, majf=0, minf=53 00:30:34.806 IO depths : 1=5.0%, 2=10.0%, 4=21.2%, 8=56.1%, 16=7.8%, 32=0.0%, >=64=0.0% 00:30:34.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.806 complete : 0=0.0%, 4=93.1%, 8=1.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.806 issued rwts: total=6240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.806 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.806 filename1: (groupid=0, jobs=1): err= 0: pid=101232: Wed May 15 01:33:08 2024 00:30:34.806 read: IOPS=577, BW=2311KiB/s (2366kB/s)(22.6MiB/10010msec) 00:30:34.806 slat (usec): min=6, max=539, avg=24.31, stdev=17.78 00:30:34.806 clat (usec): min=6000, max=49945, avg=27543.56, stdev=5296.04 00:30:34.806 lat (usec): min=6011, max=49953, avg=27567.87, stdev=5293.09 00:30:34.806 clat percentiles (usec): 00:30:34.806 | 1.00th=[15008], 5.00th=[20579], 10.00th=[23987], 20.00th=[24511], 00:30:34.806 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084], 00:30:34.806 | 70.00th=[29492], 80.00th=[32113], 90.00th=[34866], 95.00th=[37487], 00:30:34.806 | 99.00th=[42206], 99.50th=[43779], 99.90th=[46400], 99.95th=[47449], 00:30:34.806 | 99.99th=[50070] 00:30:34.806 bw ( KiB/s): min= 1976, max= 2552, per=3.92%, avg=2303.89, stdev=175.44, samples=19 00:30:34.806 iops : min= 494, max= 638, avg=575.89, stdev=43.91, samples=19 00:30:34.806 lat (msec) : 10=0.31%, 20=4.27%, 50=95.42% 00:30:34.806 cpu : usr=94.08%, sys=3.20%, ctx=52, majf=0, minf=40 00:30:34.807 IO depths : 1=1.3%, 2=2.9%, 4=10.4%, 8=72.1%, 16=13.2%, 32=0.0%, >=64=0.0% 00:30:34.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.807 complete : 0=0.0%, 4=90.9%, 8=4.9%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.807 issued rwts: total=5783,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.807 filename1: (groupid=0, jobs=1): err= 0: pid=101233: Wed May 15 01:33:08 2024 00:30:34.807 read: IOPS=603, BW=2416KiB/s (2474kB/s)(23.6MiB/10010msec) 00:30:34.807 slat (nsec): min=6322, max=93606, avg=18120.86, stdev=9933.55 00:30:34.807 clat (usec): min=9953, max=57026, avg=26366.37, stdev=4712.25 00:30:34.807 lat (usec): min=9960, max=57043, avg=26384.49, stdev=4712.95 00:30:34.807 clat percentiles (usec): 00:30:34.807 | 1.00th=[14484], 5.00th=[19530], 10.00th=[23725], 20.00th=[24511], 00:30:34.807 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:30:34.807 | 70.00th=[26084], 80.00th=[28181], 90.00th=[33162], 95.00th=[35914], 00:30:34.807 | 99.00th=[41681], 99.50th=[43254], 99.90th=[49021], 99.95th=[49546], 00:30:34.807 | 99.99th=[56886] 00:30:34.807 bw ( KiB/s): min= 2043, max= 2560, per=4.11%, avg=2417.16, stdev=116.31, samples=19 00:30:34.807 iops : min= 510, max= 640, avg=604.21, stdev=29.23, samples=19 00:30:34.807 lat (msec) : 10=0.10%, 20=5.29%, 50=94.56%, 100=0.05% 00:30:34.807 cpu : usr=96.96%, sys=2.58%, ctx=19, majf=0, minf=58 00:30:34.807 IO depths : 1=1.5%, 2=3.2%, 4=11.2%, 8=72.1%, 16=11.9%, 32=0.0%, >=64=0.0% 00:30:34.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.807 complete : 0=0.0%, 4=90.7%, 8=4.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.807 issued rwts: total=6046,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.807 latency : target=0, window=0, percentile=100.00%, depth=16 
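(A quick way to sanity-check the per-job blocks in this listing: avg bandwidth divided by avg IOPS gives the transfer size, and the per= figure is that job's share of the group total reported at the end of the run. For the pid=101220 job above, 2537 KiB/s ÷ 634.20 IOPS ≈ 4.0 KiB per read, and 2537 KiB/s out of the 57.4 MiB/s (≈ 58778 KiB/s) group total is ≈ 4.32%, matching its per=4.32%.)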
00:30:34.807 filename2: (groupid=0, jobs=1): err= 0: pid=101234: Wed May 15 01:33:08 2024 00:30:34.807 read: IOPS=600, BW=2401KiB/s (2458kB/s)(23.5MiB/10005msec) 00:30:34.807 slat (nsec): min=6161, max=76984, avg=18276.40, stdev=10546.42 00:30:34.807 clat (usec): min=6504, max=50910, avg=26555.79, stdev=5254.56 00:30:34.807 lat (usec): min=6536, max=50927, avg=26574.07, stdev=5254.91 00:30:34.807 clat percentiles (usec): 00:30:34.807 | 1.00th=[14091], 5.00th=[17433], 10.00th=[22938], 20.00th=[24511], 00:30:34.807 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[25822], 00:30:34.807 | 70.00th=[26346], 80.00th=[29754], 90.00th=[34341], 95.00th=[36963], 00:30:34.807 | 99.00th=[41681], 99.50th=[44303], 99.90th=[51119], 99.95th=[51119], 00:30:34.807 | 99.99th=[51119] 00:30:34.807 bw ( KiB/s): min= 2180, max= 2584, per=4.07%, avg=2390.11, stdev=99.83, samples=19 00:30:34.807 iops : min= 545, max= 646, avg=597.53, stdev=24.96, samples=19 00:30:34.807 lat (msec) : 10=0.30%, 20=6.53%, 50=92.91%, 100=0.27% 00:30:34.807 cpu : usr=97.29%, sys=2.26%, ctx=15, majf=0, minf=52 00:30:34.807 IO depths : 1=0.7%, 2=1.5%, 4=8.4%, 8=75.8%, 16=13.6%, 32=0.0%, >=64=0.0% 00:30:34.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.807 complete : 0=0.0%, 4=90.1%, 8=6.1%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.807 issued rwts: total=6005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.807 filename2: (groupid=0, jobs=1): err= 0: pid=101235: Wed May 15 01:33:08 2024 00:30:34.807 read: IOPS=600, BW=2402KiB/s (2460kB/s)(23.5MiB/10003msec) 00:30:34.807 slat (nsec): min=4432, max=75041, avg=18310.04, stdev=12244.07 00:30:34.807 clat (usec): min=3523, max=64373, avg=26550.57, stdev=4869.44 00:30:34.807 lat (usec): min=3530, max=64386, avg=26568.88, stdev=4867.95 00:30:34.807 clat percentiles (usec): 00:30:34.807 | 1.00th=[13042], 5.00th=[23725], 10.00th=[24249], 20.00th=[24773], 00:30:34.807 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25297], 60.00th=[25560], 00:30:34.807 | 70.00th=[26084], 80.00th=[26870], 90.00th=[31589], 95.00th=[35914], 00:30:34.807 | 99.00th=[45876], 99.50th=[49021], 99.90th=[64226], 99.95th=[64226], 00:30:34.807 | 99.99th=[64226] 00:30:34.807 bw ( KiB/s): min= 1968, max= 2560, per=4.05%, avg=2382.26, stdev=171.73, samples=19 00:30:34.807 iops : min= 492, max= 640, avg=595.53, stdev=42.92, samples=19 00:30:34.807 lat (msec) : 4=0.12%, 10=0.38%, 20=2.08%, 50=97.15%, 100=0.27% 00:30:34.807 cpu : usr=97.16%, sys=2.39%, ctx=15, majf=0, minf=64 00:30:34.807 IO depths : 1=0.5%, 2=1.2%, 4=5.7%, 8=76.8%, 16=15.9%, 32=0.0%, >=64=0.0% 00:30:34.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.807 complete : 0=0.0%, 4=90.8%, 8=6.2%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.807 issued rwts: total=6008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.807 filename2: (groupid=0, jobs=1): err= 0: pid=101236: Wed May 15 01:33:08 2024 00:30:34.807 read: IOPS=612, BW=2450KiB/s (2509kB/s)(24.0MiB/10014msec) 00:30:34.807 slat (nsec): min=6362, max=80571, avg=18411.98, stdev=10175.60 00:30:34.807 clat (usec): min=8231, max=45720, avg=26000.61, stdev=4144.73 00:30:34.807 lat (usec): min=8245, max=45749, avg=26019.02, stdev=4145.56 00:30:34.807 clat percentiles (usec): 00:30:34.807 | 1.00th=[14484], 5.00th=[19268], 10.00th=[23462], 20.00th=[24511], 00:30:34.807 | 30.00th=[24773], 40.00th=[25035], 
50.00th=[25297], 60.00th=[25560], 00:30:34.807 | 70.00th=[26084], 80.00th=[26608], 90.00th=[31851], 95.00th=[34341], 00:30:34.807 | 99.00th=[39584], 99.50th=[42206], 99.90th=[44303], 99.95th=[45351], 00:30:34.807 | 99.99th=[45876] 00:30:34.807 bw ( KiB/s): min= 2256, max= 2608, per=4.17%, avg=2447.47, stdev=94.12, samples=19 00:30:34.807 iops : min= 564, max= 652, avg=611.79, stdev=23.61, samples=19 00:30:34.807 lat (msec) : 10=0.02%, 20=6.03%, 50=93.95% 00:30:34.807 cpu : usr=97.33%, sys=2.21%, ctx=18, majf=0, minf=36 00:30:34.807 IO depths : 1=1.3%, 2=2.6%, 4=9.2%, 8=74.2%, 16=12.7%, 32=0.0%, >=64=0.0% 00:30:34.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.807 complete : 0=0.0%, 4=90.2%, 8=5.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.807 issued rwts: total=6134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.807 filename2: (groupid=0, jobs=1): err= 0: pid=101238: Wed May 15 01:33:08 2024 00:30:34.807 read: IOPS=608, BW=2435KiB/s (2493kB/s)(23.8MiB/10014msec) 00:30:34.807 slat (nsec): min=6322, max=78241, avg=18357.84, stdev=10416.66 00:30:34.807 clat (usec): min=8985, max=52054, avg=26174.85, stdev=4965.39 00:30:34.807 lat (usec): min=8996, max=52069, avg=26193.20, stdev=4966.29 00:30:34.807 clat percentiles (usec): 00:30:34.807 | 1.00th=[13829], 5.00th=[17433], 10.00th=[21627], 20.00th=[24249], 00:30:34.807 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:30:34.807 | 70.00th=[26084], 80.00th=[28705], 90.00th=[33162], 95.00th=[36439], 00:30:34.807 | 99.00th=[41157], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:30:34.807 | 99.99th=[52167] 00:30:34.807 bw ( KiB/s): min= 2299, max= 2608, per=4.14%, avg=2432.74, stdev=90.26, samples=19 00:30:34.807 iops : min= 574, max= 652, avg=608.11, stdev=22.64, samples=19 00:30:34.807 lat (msec) : 10=0.03%, 20=7.74%, 50=92.21%, 100=0.02% 00:30:34.807 cpu : usr=97.30%, sys=2.26%, ctx=17, majf=0, minf=42 00:30:34.807 IO depths : 1=0.9%, 2=1.8%, 4=9.5%, 8=74.9%, 16=12.9%, 32=0.0%, >=64=0.0% 00:30:34.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.807 complete : 0=0.0%, 4=90.3%, 8=5.2%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.807 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.807 filename2: (groupid=0, jobs=1): err= 0: pid=101239: Wed May 15 01:33:08 2024 00:30:34.807 read: IOPS=607, BW=2429KiB/s (2487kB/s)(23.7MiB/10009msec) 00:30:34.807 slat (nsec): min=6355, max=73649, avg=19204.24, stdev=10789.00 00:30:34.807 clat (usec): min=10748, max=48708, avg=26242.08, stdev=4181.91 00:30:34.807 lat (usec): min=10772, max=48735, avg=26261.29, stdev=4181.43 00:30:34.807 clat percentiles (usec): 00:30:34.807 | 1.00th=[14615], 5.00th=[22938], 10.00th=[23987], 20.00th=[24511], 00:30:34.807 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:30:34.807 | 70.00th=[26084], 80.00th=[26608], 90.00th=[31851], 95.00th=[35390], 00:30:34.807 | 99.00th=[42730], 99.50th=[44827], 99.90th=[47973], 99.95th=[48497], 00:30:34.807 | 99.99th=[48497] 00:30:34.807 bw ( KiB/s): min= 2176, max= 2576, per=4.13%, avg=2424.32, stdev=112.33, samples=19 00:30:34.807 iops : min= 544, max= 644, avg=606.00, stdev=28.11, samples=19 00:30:34.807 lat (msec) : 20=3.59%, 50=96.41% 00:30:34.807 cpu : usr=97.56%, sys=1.99%, ctx=17, majf=0, minf=39 00:30:34.807 IO depths : 1=0.5%, 2=1.7%, 4=10.3%, 8=73.6%, 
16=13.9%, 32=0.0%, >=64=0.0% 00:30:34.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.807 complete : 0=0.0%, 4=91.8%, 8=3.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.807 issued rwts: total=6077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.807 filename2: (groupid=0, jobs=1): err= 0: pid=101240: Wed May 15 01:33:08 2024 00:30:34.807 read: IOPS=588, BW=2355KiB/s (2412kB/s)(23.0MiB/10009msec) 00:30:34.807 slat (nsec): min=6206, max=79061, avg=18408.41, stdev=10388.45 00:30:34.807 clat (usec): min=7882, max=57112, avg=27071.25, stdev=5271.67 00:30:34.807 lat (usec): min=7891, max=57129, avg=27089.66, stdev=5271.91 00:30:34.807 clat percentiles (usec): 00:30:34.807 | 1.00th=[14091], 5.00th=[19530], 10.00th=[23725], 20.00th=[24511], 00:30:34.807 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084], 00:30:34.807 | 70.00th=[27132], 80.00th=[30540], 90.00th=[34866], 95.00th=[37487], 00:30:34.807 | 99.00th=[41681], 99.50th=[44303], 99.90th=[49546], 99.95th=[56886], 00:30:34.807 | 99.99th=[56886] 00:30:34.807 bw ( KiB/s): min= 2100, max= 2492, per=4.00%, avg=2351.95, stdev=111.20, samples=20 00:30:34.807 iops : min= 525, max= 623, avg=587.95, stdev=27.85, samples=20 00:30:34.807 lat (msec) : 10=0.19%, 20=5.50%, 50=94.23%, 100=0.08% 00:30:34.807 cpu : usr=97.29%, sys=2.27%, ctx=16, majf=0, minf=46 00:30:34.807 IO depths : 1=0.8%, 2=1.5%, 4=8.9%, 8=75.4%, 16=13.4%, 32=0.0%, >=64=0.0% 00:30:34.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.807 complete : 0=0.0%, 4=90.3%, 8=5.7%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.807 issued rwts: total=5893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.807 filename2: (groupid=0, jobs=1): err= 0: pid=101241: Wed May 15 01:33:08 2024 00:30:34.807 read: IOPS=614, BW=2457KiB/s (2516kB/s)(24.0MiB/10003msec) 00:30:34.807 slat (nsec): min=6452, max=80827, avg=21854.81, stdev=11683.78 00:30:34.807 clat (usec): min=2666, max=79303, avg=25926.38, stdev=3846.08 00:30:34.807 lat (usec): min=2678, max=79317, avg=25948.23, stdev=3844.25 00:30:34.807 clat percentiles (usec): 00:30:34.807 | 1.00th=[16909], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:30:34.807 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:30:34.807 | 70.00th=[25822], 80.00th=[26346], 90.00th=[29492], 95.00th=[32900], 00:30:34.808 | 99.00th=[36963], 99.50th=[41157], 99.90th=[64226], 99.95th=[64226], 00:30:34.808 | 99.99th=[79168] 00:30:34.808 bw ( KiB/s): min= 1920, max= 2560, per=4.15%, avg=2440.32, stdev=155.37, samples=19 00:30:34.808 iops : min= 480, max= 640, avg=610.05, stdev=38.83, samples=19 00:30:34.808 lat (msec) : 4=0.02%, 10=0.47%, 20=1.50%, 50=97.75%, 100=0.26% 00:30:34.808 cpu : usr=97.32%, sys=2.21%, ctx=16, majf=0, minf=62 00:30:34.808 IO depths : 1=0.7%, 2=1.6%, 4=9.2%, 8=76.4%, 16=12.1%, 32=0.0%, >=64=0.0% 00:30:34.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.808 complete : 0=0.0%, 4=89.9%, 8=4.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.808 issued rwts: total=6145,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.808 filename2: (groupid=0, jobs=1): err= 0: pid=101242: Wed May 15 01:33:08 2024 00:30:34.808 read: IOPS=618, BW=2476KiB/s (2535kB/s)(24.2MiB/10014msec) 00:30:34.808 slat (nsec): min=6344, max=87113, 
avg=20487.78, stdev=11177.43 00:30:34.808 clat (usec): min=12486, max=43531, avg=25693.95, stdev=3583.87 00:30:34.808 lat (usec): min=12527, max=43547, avg=25714.44, stdev=3584.54 00:30:34.808 clat percentiles (usec): 00:30:34.808 | 1.00th=[15401], 5.00th=[19792], 10.00th=[23725], 20.00th=[24511], 00:30:34.808 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25560], 00:30:34.808 | 70.00th=[25822], 80.00th=[26346], 90.00th=[31065], 95.00th=[32900], 00:30:34.808 | 99.00th=[36963], 99.50th=[39060], 99.90th=[41681], 99.95th=[43254], 00:30:34.808 | 99.99th=[43779] 00:30:34.808 bw ( KiB/s): min= 2171, max= 2832, per=4.21%, avg=2473.00, stdev=153.48, samples=19 00:30:34.808 iops : min= 542, max= 708, avg=618.21, stdev=38.45, samples=19 00:30:34.808 lat (msec) : 20=5.15%, 50=94.85% 00:30:34.808 cpu : usr=97.21%, sys=2.32%, ctx=18, majf=0, minf=33 00:30:34.808 IO depths : 1=3.0%, 2=6.0%, 4=15.0%, 8=66.1%, 16=10.0%, 32=0.0%, >=64=0.0% 00:30:34.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.808 complete : 0=0.0%, 4=91.5%, 8=3.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:34.808 issued rwts: total=6198,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:34.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:34.808 00:30:34.808 Run status group 0 (all jobs): 00:30:34.808 READ: bw=57.4MiB/s (60.2MB/s), 2293KiB/s-2637KiB/s (2348kB/s-2700kB/s), io=575MiB (603MB), run=10001-10019msec 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null1 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.808 bdev_null0 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 
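For reference, the create_subsystem helper being traced here boils down to a short sequence of SPDK RPCs. A minimal standalone sketch of the same sequence, assuming rpc_cmd is the usual wrapper around scripts/rpc.py and that an nvmf_tgt is already running with its TCP transport created:

#!/usr/bin/env bash
# Sketch only: the calls create_subsystem 0 issues above, written as direct
# scripts/rpc.py invocations (rpc_cmd is assumed to forward to this script;
# the path below is relative to an SPDK checkout).
RPC="./scripts/rpc.py"

# 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata, DIF type 1
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

# Expose the bdev as an NVMe-oF subsystem listening on 10.0.0.2:4420 (TCP)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

destroy_subsystem, traced just above, reverses this with nvmf_delete_subsystem followed by bdev_null_delete.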
00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.808 [2024-05-15 01:33:08.986891] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.808 01:33:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.808 bdev_null1 00:30:34.808 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.808 01:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:34.808 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.808 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.808 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.808 01:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:34.808 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.808 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.808 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.808 01:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:34.808 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.808 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:34.808 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.808 01:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:34.808 01:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:34.808 01:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:34.808 01:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:34.808 01:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:34.808 01:33:09 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:34.808 01:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:34.808 { 00:30:34.808 "params": { 00:30:34.808 "name": "Nvme$subsystem", 00:30:34.808 "trtype": "$TEST_TRANSPORT", 00:30:34.808 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.808 "adrfam": "ipv4", 00:30:34.808 "trsvcid": "$NVMF_PORT", 00:30:34.808 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.808 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.808 "hdgst": ${hdgst:-false}, 00:30:34.808 "ddgst": ${ddgst:-false} 00:30:34.808 }, 00:30:34.808 "method": "bdev_nvme_attach_controller" 00:30:34.808 } 00:30:34.808 EOF 00:30:34.808 )") 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:34.809 { 00:30:34.809 "params": { 00:30:34.809 "name": "Nvme$subsystem", 00:30:34.809 "trtype": "$TEST_TRANSPORT", 00:30:34.809 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:34.809 "adrfam": "ipv4", 00:30:34.809 "trsvcid": "$NVMF_PORT", 00:30:34.809 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:34.809 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:34.809 "hdgst": ${hdgst:-false}, 00:30:34.809 "ddgst": ${ddgst:-false} 00:30:34.809 }, 00:30:34.809 "method": "bdev_nvme_attach_controller" 00:30:34.809 } 00:30:34.809 EOF 00:30:34.809 )") 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:34.809 01:33:09 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:34.809 "params": { 00:30:34.809 "name": "Nvme0", 00:30:34.809 "trtype": "tcp", 00:30:34.809 "traddr": "10.0.0.2", 00:30:34.809 "adrfam": "ipv4", 00:30:34.809 "trsvcid": "4420", 00:30:34.809 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:34.809 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:34.809 "hdgst": false, 00:30:34.809 "ddgst": false 00:30:34.809 }, 00:30:34.809 "method": "bdev_nvme_attach_controller" 00:30:34.809 },{ 00:30:34.809 "params": { 00:30:34.809 "name": "Nvme1", 00:30:34.809 "trtype": "tcp", 00:30:34.809 "traddr": "10.0.0.2", 00:30:34.809 "adrfam": "ipv4", 00:30:34.809 "trsvcid": "4420", 00:30:34.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:34.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:34.809 "hdgst": false, 00:30:34.809 "ddgst": false 00:30:34.809 }, 00:30:34.809 "method": "bdev_nvme_attach_controller" 00:30:34.809 }' 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:34.809 01:33:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:34.809 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:34.809 ... 00:30:34.809 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:34.809 ... 
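The JSON printed just above is what the fio bdev plugin reads from /dev/fd/62: one bdev_nvme_attach_controller entry per target subsystem, with header and data digests disabled for this group of jobs. gen_nvmf_target_json wraps those entries in SPDK's usual subsystems/config layout; a hand-written file for the first controller would look roughly like the sketch below (the wrapper structure and the /tmp path are assumptions; the params are copied from the output above):

# Sketch: write an equivalent standalone config for the fio plugin.
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF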
00:30:34.809 fio-3.35 00:30:34.809 Starting 4 threads 00:30:34.809 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.073 00:30:40.073 filename0: (groupid=0, jobs=1): err= 0: pid=103764: Wed May 15 01:33:15 2024 00:30:40.073 read: IOPS=2667, BW=20.8MiB/s (21.9MB/s)(104MiB/5001msec) 00:30:40.073 slat (nsec): min=5814, max=50008, avg=8600.72, stdev=3111.79 00:30:40.073 clat (usec): min=1800, max=45963, avg=2977.56, stdev=1119.43 00:30:40.073 lat (usec): min=1807, max=45981, avg=2986.16, stdev=1119.40 00:30:40.073 clat percentiles (usec): 00:30:40.073 | 1.00th=[ 2114], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2671], 00:30:40.073 | 30.00th=[ 2835], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:30:40.073 | 70.00th=[ 3097], 80.00th=[ 3228], 90.00th=[ 3458], 95.00th=[ 3654], 00:30:40.073 | 99.00th=[ 3982], 99.50th=[ 4113], 99.90th=[ 4490], 99.95th=[45876], 00:30:40.073 | 99.99th=[45876] 00:30:40.073 bw ( KiB/s): min=19376, max=21936, per=24.67%, avg=21303.11, stdev=800.84, samples=9 00:30:40.073 iops : min= 2422, max= 2742, avg=2662.89, stdev=100.11, samples=9 00:30:40.073 lat (msec) : 2=0.30%, 4=98.71%, 10=0.93%, 50=0.06% 00:30:40.073 cpu : usr=93.06%, sys=6.58%, ctx=7, majf=0, minf=9 00:30:40.073 IO depths : 1=0.1%, 2=1.0%, 4=65.9%, 8=33.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:40.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.073 complete : 0=0.0%, 4=96.4%, 8=3.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.073 issued rwts: total=13341,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.073 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:40.073 filename0: (groupid=0, jobs=1): err= 0: pid=103765: Wed May 15 01:33:15 2024 00:30:40.073 read: IOPS=2723, BW=21.3MiB/s (22.3MB/s)(106MiB/5003msec) 00:30:40.073 slat (nsec): min=5877, max=60165, avg=9133.20, stdev=4170.25 00:30:40.073 clat (usec): min=1664, max=46300, avg=2914.03, stdev=1113.49 00:30:40.073 lat (usec): min=1671, max=46329, avg=2923.16, stdev=1113.60 00:30:40.073 clat percentiles (usec): 00:30:40.073 | 1.00th=[ 2024], 5.00th=[ 2245], 10.00th=[ 2376], 20.00th=[ 2606], 00:30:40.073 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:30:40.073 | 70.00th=[ 3032], 80.00th=[ 3163], 90.00th=[ 3326], 95.00th=[ 3523], 00:30:40.073 | 99.00th=[ 3851], 99.50th=[ 3982], 99.90th=[ 4359], 99.95th=[46400], 00:30:40.073 | 99.99th=[46400] 00:30:40.073 bw ( KiB/s): min=20160, max=22368, per=25.31%, avg=21854.22, stdev=683.03, samples=9 00:30:40.073 iops : min= 2520, max= 2796, avg=2731.78, stdev=85.38, samples=9 00:30:40.073 lat (msec) : 2=0.75%, 4=98.77%, 10=0.42%, 50=0.06% 00:30:40.073 cpu : usr=93.68%, sys=5.94%, ctx=7, majf=0, minf=0 00:30:40.073 IO depths : 1=0.1%, 2=0.9%, 4=66.3%, 8=32.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:40.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.073 complete : 0=0.0%, 4=96.3%, 8=3.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.073 issued rwts: total=13627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.073 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:40.073 filename1: (groupid=0, jobs=1): err= 0: pid=103766: Wed May 15 01:33:15 2024 00:30:40.073 read: IOPS=2660, BW=20.8MiB/s (21.8MB/s)(104MiB/5002msec) 00:30:40.073 slat (nsec): min=5839, max=46864, avg=8846.71, stdev=3233.94 00:30:40.073 clat (usec): min=1704, max=45352, avg=2985.42, stdev=1106.52 00:30:40.073 lat (usec): min=1710, max=45369, avg=2994.26, stdev=1106.49 00:30:40.073 clat percentiles (usec): 00:30:40.073 | 1.00th=[ 
2089], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2671], 00:30:40.073 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:30:40.073 | 70.00th=[ 3097], 80.00th=[ 3228], 90.00th=[ 3458], 95.00th=[ 3654], 00:30:40.073 | 99.00th=[ 4015], 99.50th=[ 4113], 99.90th=[ 4555], 99.95th=[45351], 00:30:40.073 | 99.99th=[45351] 00:30:40.073 bw ( KiB/s): min=19414, max=21856, per=24.63%, avg=21268.22, stdev=725.15, samples=9 00:30:40.073 iops : min= 2426, max= 2732, avg=2658.44, stdev=90.88, samples=9 00:30:40.073 lat (msec) : 2=0.48%, 4=98.53%, 10=0.93%, 50=0.06% 00:30:40.073 cpu : usr=93.74%, sys=5.92%, ctx=5, majf=0, minf=9 00:30:40.073 IO depths : 1=0.1%, 2=0.9%, 4=65.9%, 8=33.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:40.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.073 complete : 0=0.0%, 4=96.5%, 8=3.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.073 issued rwts: total=13306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.073 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:40.073 filename1: (groupid=0, jobs=1): err= 0: pid=103767: Wed May 15 01:33:15 2024 00:30:40.073 read: IOPS=2745, BW=21.4MiB/s (22.5MB/s)(107MiB/5003msec) 00:30:40.073 slat (nsec): min=5855, max=54098, avg=8800.34, stdev=3140.73 00:30:40.073 clat (usec): min=1103, max=5711, avg=2892.48, stdev=419.02 00:30:40.073 lat (usec): min=1109, max=5742, avg=2901.28, stdev=419.07 00:30:40.073 clat percentiles (usec): 00:30:40.073 | 1.00th=[ 1729], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2606], 00:30:40.073 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 2900], 60.00th=[ 2933], 00:30:40.073 | 70.00th=[ 3032], 80.00th=[ 3195], 90.00th=[ 3359], 95.00th=[ 3589], 00:30:40.073 | 99.00th=[ 3949], 99.50th=[ 4080], 99.90th=[ 4424], 99.95th=[ 5342], 00:30:40.073 | 99.99th=[ 5669] 00:30:40.073 bw ( KiB/s): min=21584, max=22912, per=25.46%, avg=21984.00, stdev=422.11, samples=9 00:30:40.073 iops : min= 2698, max= 2864, avg=2748.00, stdev=52.76, samples=9 00:30:40.073 lat (msec) : 2=2.07%, 4=97.24%, 10=0.69% 00:30:40.073 cpu : usr=93.56%, sys=6.12%, ctx=5, majf=0, minf=9 00:30:40.073 IO depths : 1=0.1%, 2=0.8%, 4=65.7%, 8=33.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:40.073 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.073 complete : 0=0.0%, 4=96.7%, 8=3.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:40.073 issued rwts: total=13734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:40.073 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:40.073 00:30:40.073 Run status group 0 (all jobs): 00:30:40.073 READ: bw=84.3MiB/s (88.4MB/s), 20.8MiB/s-21.4MiB/s (21.8MB/s-22.5MB/s), io=422MiB (442MB), run=5001-5003msec 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.073 00:30:40.073 real 0m24.313s 00:30:40.073 user 4m54.806s 00:30:40.073 sys 0m9.192s 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:40.073 01:33:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:40.073 ************************************ 00:30:40.073 END TEST fio_dif_rand_params 00:30:40.073 ************************************ 00:30:40.073 01:33:15 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:40.073 01:33:15 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:40.073 01:33:15 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:40.073 01:33:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:40.073 ************************************ 00:30:40.073 START TEST fio_dif_digest 00:30:40.073 ************************************ 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:40.073 bdev_null0 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:40.073 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:40.074 [2024-05-15 01:33:15.511277] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:40.074 { 
00:30:40.074 "params": { 00:30:40.074 "name": "Nvme$subsystem", 00:30:40.074 "trtype": "$TEST_TRANSPORT", 00:30:40.074 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:40.074 "adrfam": "ipv4", 00:30:40.074 "trsvcid": "$NVMF_PORT", 00:30:40.074 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:40.074 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:40.074 "hdgst": ${hdgst:-false}, 00:30:40.074 "ddgst": ${ddgst:-false} 00:30:40.074 }, 00:30:40.074 "method": "bdev_nvme_attach_controller" 00:30:40.074 } 00:30:40.074 EOF 00:30:40.074 )") 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:40.074 "params": { 00:30:40.074 "name": "Nvme0", 00:30:40.074 "trtype": "tcp", 00:30:40.074 "traddr": "10.0.0.2", 00:30:40.074 "adrfam": "ipv4", 00:30:40.074 "trsvcid": "4420", 00:30:40.074 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:40.074 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:40.074 "hdgst": true, 00:30:40.074 "ddgst": true 00:30:40.074 }, 00:30:40.074 "method": "bdev_nvme_attach_controller" 00:30:40.074 }' 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:40.074 01:33:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:40.331 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:40.331 ... 
00:30:40.331 fio-3.35 00:30:40.331 Starting 3 threads 00:30:40.331 EAL: No free 2048 kB hugepages reported on node 1 00:30:52.565 00:30:52.565 filename0: (groupid=0, jobs=1): err= 0: pid=104905: Wed May 15 01:33:26 2024 00:30:52.565 read: IOPS=321, BW=40.2MiB/s (42.2MB/s)(404MiB/10044msec) 00:30:52.565 slat (nsec): min=6201, max=27676, avg=10872.76, stdev=2045.48 00:30:52.565 clat (usec): min=4980, max=58328, avg=9294.53, stdev=3704.21 00:30:52.565 lat (usec): min=4987, max=58353, avg=9305.41, stdev=3704.34 00:30:52.565 clat percentiles (usec): 00:30:52.565 | 1.00th=[ 6063], 5.00th=[ 6915], 10.00th=[ 7308], 20.00th=[ 7832], 00:30:52.565 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9372], 00:30:52.565 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10683], 95.00th=[11076], 00:30:52.565 | 99.00th=[12911], 99.50th=[51119], 99.90th=[55313], 99.95th=[57934], 00:30:52.565 | 99.99th=[58459] 00:30:52.565 bw ( KiB/s): min=33536, max=46080, per=42.29%, avg=41356.80, stdev=2831.85, samples=20 00:30:52.565 iops : min= 262, max= 360, avg=323.10, stdev=22.12, samples=20 00:30:52.565 lat (msec) : 10=77.51%, 20=21.87%, 50=0.09%, 100=0.53% 00:30:52.565 cpu : usr=91.25%, sys=8.38%, ctx=14, majf=0, minf=38 00:30:52.565 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:52.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.565 issued rwts: total=3233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:52.565 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:52.565 filename0: (groupid=0, jobs=1): err= 0: pid=104906: Wed May 15 01:33:26 2024 00:30:52.565 read: IOPS=305, BW=38.2MiB/s (40.0MB/s)(384MiB/10045msec) 00:30:52.565 slat (nsec): min=6219, max=28579, avg=10957.18, stdev=1984.81 00:30:52.565 clat (usec): min=6293, max=58938, avg=9795.03, stdev=4661.06 00:30:52.565 lat (usec): min=6304, max=58966, avg=9805.99, stdev=4661.17 00:30:52.565 clat percentiles (usec): 00:30:52.565 | 1.00th=[ 6915], 5.00th=[ 7439], 10.00th=[ 7767], 20.00th=[ 8291], 00:30:52.565 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:30:52.565 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10814], 95.00th=[11338], 00:30:52.565 | 99.00th=[50070], 99.50th=[53740], 99.90th=[56361], 99.95th=[58983], 00:30:52.565 | 99.99th=[58983] 00:30:52.565 bw ( KiB/s): min=29184, max=43008, per=40.13%, avg=39244.80, stdev=4078.36, samples=20 00:30:52.565 iops : min= 228, max= 336, avg=306.60, stdev=31.86, samples=20 00:30:52.565 lat (msec) : 10=71.71%, 20=27.25%, 50=0.03%, 100=1.01% 00:30:52.565 cpu : usr=90.63%, sys=9.01%, ctx=14, majf=0, minf=182 00:30:52.565 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:52.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.565 issued rwts: total=3068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:52.565 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:52.565 filename0: (groupid=0, jobs=1): err= 0: pid=104907: Wed May 15 01:33:26 2024 00:30:52.565 read: IOPS=136, BW=17.1MiB/s (17.9MB/s)(172MiB/10036msec) 00:30:52.565 slat (nsec): min=6190, max=25362, avg=11303.11, stdev=1574.60 00:30:52.565 clat (msec): min=10, max=103, avg=21.91, stdev=15.42 00:30:52.565 lat (msec): min=10, max=103, avg=21.92, stdev=15.42 00:30:52.565 clat percentiles (msec): 00:30:52.565 | 1.00th=[ 12], 
5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 15], 00:30:52.565 | 30.00th=[ 16], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 17], 00:30:52.565 | 70.00th=[ 18], 80.00th=[ 20], 90.00th=[ 57], 95.00th=[ 59], 00:30:52.565 | 99.00th=[ 63], 99.50th=[ 99], 99.90th=[ 102], 99.95th=[ 104], 00:30:52.565 | 99.99th=[ 104] 00:30:52.565 bw ( KiB/s): min=12288, max=22528, per=17.93%, avg=17536.00, stdev=2793.87, samples=20 00:30:52.565 iops : min= 96, max= 176, avg=137.00, stdev=21.83, samples=20 00:30:52.565 lat (msec) : 20=84.20%, 50=2.48%, 100=13.18%, 250=0.15% 00:30:52.565 cpu : usr=92.58%, sys=7.13%, ctx=13, majf=0, minf=65 00:30:52.565 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:52.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:52.565 issued rwts: total=1373,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:52.565 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:52.565 00:30:52.565 Run status group 0 (all jobs): 00:30:52.565 READ: bw=95.5MiB/s (100MB/s), 17.1MiB/s-40.2MiB/s (17.9MB/s-42.2MB/s), io=959MiB (1006MB), run=10036-10045msec 00:30:52.565 01:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:52.565 01:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:52.565 01:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:52.565 01:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:52.565 01:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:52.565 01:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:52.565 01:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.565 01:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:52.565 01:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.565 01:33:26 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:52.565 01:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.565 01:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:52.565 01:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.565 00:30:52.565 real 0m11.237s 00:30:52.565 user 0m36.981s 00:30:52.565 sys 0m2.804s 00:30:52.565 01:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:52.565 01:33:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:52.565 ************************************ 00:30:52.565 END TEST fio_dif_digest 00:30:52.565 ************************************ 00:30:52.565 01:33:26 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:52.565 01:33:26 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:52.565 01:33:26 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:52.565 01:33:26 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:30:52.565 01:33:26 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:52.565 01:33:26 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:30:52.565 01:33:26 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:52.565 01:33:26 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:52.565 rmmod nvme_tcp 00:30:52.565 rmmod nvme_fabrics 00:30:52.565 rmmod nvme_keyring 00:30:52.565 
01:33:26 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:52.565 01:33:26 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:30:52.565 01:33:26 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:30:52.565 01:33:26 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 95461 ']' 00:30:52.565 01:33:26 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 95461 00:30:52.565 01:33:26 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 95461 ']' 00:30:52.565 01:33:26 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 95461 00:30:52.565 01:33:26 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:30:52.565 01:33:26 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:52.565 01:33:26 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95461 00:30:52.565 01:33:26 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:52.565 01:33:26 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:52.565 01:33:26 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95461' 00:30:52.565 killing process with pid 95461 00:30:52.565 01:33:26 nvmf_dif -- common/autotest_common.sh@965 -- # kill 95461 00:30:52.565 [2024-05-15 01:33:26.890827] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:52.565 01:33:26 nvmf_dif -- common/autotest_common.sh@970 -- # wait 95461 00:30:52.566 01:33:27 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:52.566 01:33:27 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:54.470 Waiting for block devices as requested 00:30:54.470 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:54.470 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:54.470 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:54.470 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:54.729 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:54.729 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:54.729 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:54.989 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:54.989 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:54.989 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:55.249 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:55.249 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:55.249 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:55.508 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:55.508 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:55.508 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:55.768 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:30:55.768 01:33:31 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:55.768 01:33:31 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:55.768 01:33:31 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:55.768 01:33:31 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:55.768 01:33:31 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.768 01:33:31 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:55.768 01:33:31 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.304 01:33:33 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:58.304 00:30:58.304 real 1m15.475s 00:30:58.304 user 7m16.204s 
00:30:58.304 sys 0m29.207s 00:30:58.304 01:33:33 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:58.304 01:33:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:58.304 ************************************ 00:30:58.304 END TEST nvmf_dif 00:30:58.304 ************************************ 00:30:58.304 01:33:33 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:58.304 01:33:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:30:58.304 01:33:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:58.304 01:33:33 -- common/autotest_common.sh@10 -- # set +x 00:30:58.304 ************************************ 00:30:58.305 START TEST nvmf_abort_qd_sizes 00:30:58.305 ************************************ 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:58.305 * Looking for test storage... 00:30:58.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.305 01:33:33 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:30:58.305 01:33:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:04.873 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:04.873 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:04.873 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:04.874 Found net devices under 0000:af:00.0: cvl_0_0 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:04.874 Found net devices under 0000:af:00.1: cvl_0_1 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
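The trace above is nvmf/common.sh enumerating the NICs usable for this NVMe/TCP run: it matches the two Intel E810 ports (vendor 0x8086, device 0x159b) at 0000:af:00.0 and 0000:af:00.1 and records the kernel net devices behind them, cvl_0_0 and cvl_0_1. A minimal sketch of that discovery step, assuming the same two PCI addresses reported in this run (loop body and variable names are illustrative, not the script's exact code):

    # illustrative only: condenses the pci_devs / pci_net_devs walk traced above
    for pci in 0000:af:00.0 0000:af:00.1; do                 # E810 ports found in this run
        pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)       # kernel netdevs backing the port
        echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
    done

The two names feed TCP_INTERFACE_LIST in the lines that follow, where cvl_0_0 is taken as the target-side interface (moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2) and cvl_0_1 as the initiator side (10.0.0.1).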
00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.874 01:33:39 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:04.874 01:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:04.874 01:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:04.874 01:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:04.874 01:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:04.874 01:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:04.874 01:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:04.874 01:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:04.874 01:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:04.874 01:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:04.874 01:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:04.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:04.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:31:04.874 00:31:04.874 --- 10.0.0.2 ping statistics --- 00:31:04.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.874 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:31:04.874 01:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:04.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:04.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:31:04.874 00:31:04.874 --- 10.0.0.1 ping statistics --- 00:31:04.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.874 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:31:04.874 01:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.874 01:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:04.874 01:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:04.874 01:33:40 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:08.160 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:08.160 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:08.160 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:08.160 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:08.160 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:08.160 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:08.160 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:08.160 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:08.160 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:08.160 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:08.160 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:08.160 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:08.161 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:08.161 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:08.161 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:08.161 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:09.535 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:31:09.535 01:33:45 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:09.535 01:33:45 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:09.535 01:33:45 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:09.535 01:33:45 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:09.535 01:33:45 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:09.535 01:33:45 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:09.535 01:33:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:09.535 01:33:45 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:09.535 01:33:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:31:09.535 01:33:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:09.535 01:33:45 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=113194 00:31:09.535 01:33:45 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:09.535 01:33:45 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 113194 00:31:09.535 01:33:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 113194 ']' 00:31:09.535 01:33:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:09.535 01:33:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:09.535 01:33:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:09.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:09.535 01:33:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:09.535 01:33:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:09.535 [2024-05-15 01:33:45.144863] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:31:09.535 [2024-05-15 01:33:45.144908] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:09.535 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.535 [2024-05-15 01:33:45.218799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:09.794 [2024-05-15 01:33:45.297700] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:09.794 [2024-05-15 01:33:45.297741] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:09.794 [2024-05-15 01:33:45.297752] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:09.794 [2024-05-15 01:33:45.297760] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:09.794 [2024-05-15 01:33:45.297767] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:09.794 [2024-05-15 01:33:45.297819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:09.794 [2024-05-15 01:33:45.297920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:09.794 [2024-05-15 01:33:45.298042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:09.794 [2024-05-15 01:33:45.298044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.361 01:33:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:10.361 01:33:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:31:10.361 01:33:45 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:10.361 01:33:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:10.361 01:33:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:10.361 01:33:45 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:10.362 01:33:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:10.362 01:33:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:10.362 01:33:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:10.362 01:33:45 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:10.362 01:33:45 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:10.362 01:33:45 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:d8:00.0 ]] 00:31:10.362 01:33:45 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:10.362 01:33:45 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:10.362 01:33:45 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:31:10.362 01:33:46 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:31:10.362 01:33:46 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:10.362 01:33:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:10.362 01:33:46 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:10.362 01:33:46 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:d8:00.0 00:31:10.362 01:33:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:10.362 01:33:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:d8:00.0 00:31:10.362 01:33:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:10.362 01:33:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:10.362 01:33:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:10.362 01:33:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:10.622 ************************************ 00:31:10.622 START TEST spdk_target_abort 00:31:10.622 ************************************ 00:31:10.622 01:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:31:10.622 01:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:10.622 01:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target 00:31:10.622 01:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:10.622 01:33:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:13.218 spdk_targetn1 00:31:13.218 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.218 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:13.218 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.218 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:13.218 [2024-05-15 01:33:48.905720] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:13.477 [2024-05-15 01:33:48.941764] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:13.477 [2024-05-15 01:33:48.942003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:13.477 01:33:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:13.477 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.766 Initializing NVMe Controllers 00:31:16.766 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:16.766 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:16.766 Initialization complete. Launching workers. 00:31:16.766 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6003, failed: 0 00:31:16.766 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1515, failed to submit 4488 00:31:16.766 success 907, unsuccess 608, failed 0 00:31:16.766 01:33:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:16.766 01:33:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:16.766 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.053 Initializing NVMe Controllers 00:31:20.053 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:20.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:20.053 Initialization complete. Launching workers. 00:31:20.053 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8576, failed: 0 00:31:20.053 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1237, failed to submit 7339 00:31:20.053 success 311, unsuccess 926, failed 0 00:31:20.053 01:33:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:20.053 01:33:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:20.053 EAL: No free 2048 kB hugepages reported on node 1 00:31:23.344 Initializing NVMe Controllers 00:31:23.344 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:23.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:23.344 Initialization complete. Launching workers. 
00:31:23.344 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35348, failed: 0 00:31:23.344 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2793, failed to submit 32555 00:31:23.344 success 692, unsuccess 2101, failed 0 00:31:23.344 01:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:31:23.344 01:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.344 01:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:23.344 01:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:23.344 01:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:31:23.344 01:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.344 01:33:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:25.250 01:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.250 01:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 113194 00:31:25.250 01:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 113194 ']' 00:31:25.250 01:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 113194 00:31:25.250 01:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:31:25.250 01:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:25.250 01:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 113194 00:31:25.250 01:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:25.250 01:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:25.250 01:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 113194' 00:31:25.250 killing process with pid 113194 00:31:25.250 01:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 113194 00:31:25.250 [2024-05-15 01:34:00.826711] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:25.250 01:34:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 113194 00:31:25.510 00:31:25.510 real 0m14.971s 00:31:25.510 user 0m59.097s 00:31:25.510 sys 0m2.856s 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:25.510 ************************************ 00:31:25.510 END TEST spdk_target_abort 00:31:25.510 ************************************ 00:31:25.510 01:34:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:31:25.510 01:34:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:25.510 01:34:01 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:31:25.510 01:34:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:25.510 ************************************ 00:31:25.510 START TEST kernel_target_abort 00:31:25.510 ************************************ 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:25.510 01:34:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:28.799 Waiting for block devices as requested 00:31:28.799 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:28.799 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:28.799 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:28.799 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:28.799 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:28.799 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:28.799 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:28.799 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:28.799 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:29.059 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:29.059 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:29.059 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:29.319 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:29.319 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:29.319 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:29.578 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:29.578 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:29.838 No valid GPT data, bailing 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:29.838 01:34:05 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:31:29.838 00:31:29.838 Discovery Log Number of Records 2, Generation counter 2 00:31:29.838 =====Discovery Log Entry 0====== 00:31:29.838 trtype: tcp 00:31:29.838 adrfam: ipv4 00:31:29.838 subtype: current discovery subsystem 00:31:29.838 treq: not specified, sq flow control disable supported 00:31:29.838 portid: 1 00:31:29.838 trsvcid: 4420 00:31:29.838 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:29.838 traddr: 10.0.0.1 00:31:29.838 eflags: none 00:31:29.838 sectype: none 00:31:29.838 =====Discovery Log Entry 1====== 00:31:29.838 trtype: tcp 00:31:29.838 adrfam: ipv4 00:31:29.838 subtype: nvme subsystem 00:31:29.838 treq: not specified, sq flow control disable supported 00:31:29.838 portid: 1 00:31:29.838 trsvcid: 4420 00:31:29.838 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:29.838 traddr: 10.0.0.1 00:31:29.838 eflags: none 00:31:29.838 sectype: none 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:29.838 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:29.839 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:31:29.839 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:29.839 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:29.839 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:29.839 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:29.839 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:29.839 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:29.839 01:34:05 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:29.839 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:29.839 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:29.839 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:29.839 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:29.839 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:29.839 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:29.839 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:29.839 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:29.839 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:29.839 01:34:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:29.839 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.167 Initializing NVMe Controllers 00:31:33.167 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:33.167 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:33.167 Initialization complete. Launching workers. 00:31:33.167 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 58609, failed: 0 00:31:33.167 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 58609, failed to submit 0 00:31:33.167 success 0, unsuccess 58609, failed 0 00:31:33.167 01:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:33.167 01:34:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:33.167 EAL: No free 2048 kB hugepages reported on node 1 00:31:36.453 Initializing NVMe Controllers 00:31:36.453 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:36.453 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:36.453 Initialization complete. Launching workers. 
00:31:36.453 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 106788, failed: 0 00:31:36.453 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26834, failed to submit 79954 00:31:36.453 success 0, unsuccess 26834, failed 0 00:31:36.453 01:34:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:36.453 01:34:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:36.453 EAL: No free 2048 kB hugepages reported on node 1 00:31:39.743 Initializing NVMe Controllers 00:31:39.743 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:39.743 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:39.743 Initialization complete. Launching workers. 00:31:39.743 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 105367, failed: 0 00:31:39.743 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26342, failed to submit 79025 00:31:39.743 success 0, unsuccess 26342, failed 0 00:31:39.743 01:34:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:39.743 01:34:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:39.743 01:34:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:39.743 01:34:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:39.743 01:34:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:39.743 01:34:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:39.743 01:34:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:39.743 01:34:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:39.743 01:34:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:39.743 01:34:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:42.280 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:42.280 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:42.280 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:42.280 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:42.280 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:42.280 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:42.280 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:42.280 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:42.280 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:42.280 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:42.280 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:42.280 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:42.541 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:42.541 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:31:42.541 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:42.541 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:43.922 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:31:44.181 00:31:44.181 real 0m18.514s 00:31:44.181 user 0m6.665s 00:31:44.181 sys 0m5.917s 00:31:44.181 01:34:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:44.181 01:34:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:44.181 ************************************ 00:31:44.181 END TEST kernel_target_abort 00:31:44.181 ************************************ 00:31:44.181 01:34:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:44.181 01:34:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:44.181 01:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:44.181 01:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:31:44.181 01:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:44.181 01:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:31:44.181 01:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:44.181 01:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:44.181 rmmod nvme_tcp 00:31:44.181 rmmod nvme_fabrics 00:31:44.181 rmmod nvme_keyring 00:31:44.181 01:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:44.181 01:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:31:44.181 01:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:31:44.181 01:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 113194 ']' 00:31:44.181 01:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 113194 00:31:44.181 01:34:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 113194 ']' 00:31:44.181 01:34:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 113194 00:31:44.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (113194) - No such process 00:31:44.181 01:34:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 113194 is not found' 00:31:44.181 Process with pid 113194 is not found 00:31:44.181 01:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:44.181 01:34:19 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:47.472 Waiting for block devices as requested 00:31:47.472 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:47.472 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:47.472 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:47.472 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:47.472 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:47.472 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:47.731 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:47.731 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:47.731 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:47.990 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:47.990 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:47.990 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:47.990 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:48.248 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:48.248 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:48.248 0000:80:04.0 
(8086 2021): vfio-pci -> ioatdma 00:31:48.507 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:31:48.507 01:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:48.507 01:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:48.507 01:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:48.507 01:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:48.507 01:34:24 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.507 01:34:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:48.507 01:34:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.683 01:34:26 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:50.683 00:31:50.683 real 0m52.696s 00:31:50.683 user 1m10.180s 00:31:50.683 sys 0m18.737s 00:31:50.683 01:34:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:50.683 01:34:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:50.683 ************************************ 00:31:50.683 END TEST nvmf_abort_qd_sizes 00:31:50.683 ************************************ 00:31:50.683 01:34:26 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:50.683 01:34:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:31:50.683 01:34:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:50.683 01:34:26 -- common/autotest_common.sh@10 -- # set +x 00:31:50.683 ************************************ 00:31:50.683 START TEST keyring_file 00:31:50.683 ************************************ 00:31:50.683 01:34:26 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:50.942 * Looking for test storage... 
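For readability, the kernel_target_abort sequence traced above condenses to the shell sketch below. The echoed values, the abort command line, and the teardown steps are taken from the trace; xtrace does not show redirection targets, so the configfs destination files (attr_allow_any_host, device_path, enable, addr_*) follow the standard Linux nvmet layout and should be read as assumptions, as should the shortened relative paths.

modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1

# Expose the local NVMe namespace through the kernel nvmet/TCP target.
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo 1            > "$subsys/attr_allow_any_host"       # destination assumed
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"  # destination assumed
echo 1            > "$subsys/namespaces/1/enable"       # destination assumed
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"
# The trace also echoes "SPDK-nqn.2016-06.io.spdk:testnqn" into a subsystem
# identification attribute whose path is not visible in the xtrace output.

# Optional sanity check, as in the trace (hostnqn/hostid flags omitted here).
nvme discover -a 10.0.0.1 -t tcp -s 4420

# Sweep the SPDK abort example over the queue depths exercised above.
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done

# Teardown, mirroring clean_kernel_target in the trace.
echo 0 > "$subsys/namespaces/1/enable"                  # destination assumed
rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet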
00:31:50.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:50.942 01:34:26 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:50.942 01:34:26 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:50.942 01:34:26 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:50.942 01:34:26 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:50.942 01:34:26 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:50.942 01:34:26 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.942 01:34:26 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.942 01:34:26 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.942 01:34:26 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:50.942 01:34:26 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@47 -- # : 0 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:50.942 01:34:26 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:50.942 01:34:26 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:50.942 01:34:26 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:50.942 01:34:26 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:50.942 01:34:26 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:50.942 01:34:26 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:50.942 01:34:26 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:50.942 01:34:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:50.942 01:34:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:50.942 01:34:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:50.942 01:34:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:50.942 01:34:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:50.942 01:34:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eKRdA1jjEc 00:31:50.942 01:34:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:50.942 01:34:26 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eKRdA1jjEc 00:31:50.942 01:34:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eKRdA1jjEc 00:31:50.942 01:34:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.eKRdA1jjEc 00:31:50.942 01:34:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:50.942 01:34:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:50.942 01:34:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:50.942 01:34:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:50.942 01:34:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:50.942 01:34:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:50.942 01:34:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.E2jZbYTbE2 00:31:50.942 01:34:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:50.942 01:34:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:50.942 01:34:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.E2jZbYTbE2 00:31:50.942 01:34:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.E2jZbYTbE2 00:31:50.942 01:34:26 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.E2jZbYTbE2 00:31:50.942 01:34:26 keyring_file -- keyring/file.sh@30 -- # tgtpid=122639 00:31:50.942 01:34:26 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:50.942 01:34:26 keyring_file -- keyring/file.sh@32 -- # waitforlisten 122639 00:31:50.942 01:34:26 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 122639 ']' 00:31:50.942 01:34:26 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:50.942 01:34:26 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:50.942 01:34:26 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:50.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:50.942 01:34:26 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:50.942 01:34:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:51.200 [2024-05-15 01:34:26.645398] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
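The prep_key calls above reduce to the short sketch below, assuming keyring/common.sh and nvmf/common.sh are sourced as in the trace. The raw hex keys are the ones defined in keyring/file.sh; the temp-file names are whatever mktemp returns, and the redirection of the formatted key into that file is inferred, since xtrace does not display redirections.

key0=00112233445566778899aabbccddeeff
key1=112233445566778899aabbccddeeff00

key0path=$(mktemp)   # e.g. /tmp/tmp.eKRdA1jjEc in this run
key1path=$(mktemp)   # e.g. /tmp/tmp.E2jZbYTbE2 in this run

# Wrap each raw key in the NVMe TLS PSK interchange format (digest 0)
# and lock the files down, mirroring prep_key in keyring/common.sh.
format_interchange_psk "$key0" 0 > "$key0path"   # redirect assumed
format_interchange_psk "$key1" 0 > "$key1path"   # redirect assumed
chmod 0600 "$key0path" "$key1path"

# Later in the trace the files are registered with the bdevperf instance
# over its RPC socket and referenced by name (key0/key1) when attaching
# an NVMe/TCP controller with --psk.
scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"
scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 "$key1path"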
00:31:51.200 [2024-05-15 01:34:26.645455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122639 ] 00:31:51.200 EAL: No free 2048 kB hugepages reported on node 1 00:31:51.200 [2024-05-15 01:34:26.713622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:51.200 [2024-05-15 01:34:26.787460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.767 01:34:27 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:51.767 01:34:27 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:31:51.767 01:34:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:51.767 01:34:27 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.767 01:34:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:51.767 [2024-05-15 01:34:27.439151] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:52.038 null0 00:31:52.038 [2024-05-15 01:34:27.471198] nvmf_rpc.c: 614:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:31:52.038 [2024-05-15 01:34:27.471243] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:52.038 [2024-05-15 01:34:27.471511] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:52.038 [2024-05-15 01:34:27.479227] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.038 01:34:27 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:52.038 [2024-05-15 01:34:27.495264] nvmf_rpc.c: 772:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:52.038 request: 00:31:52.038 { 00:31:52.038 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:52.038 "secure_channel": false, 00:31:52.038 "listen_address": { 00:31:52.038 "trtype": "tcp", 00:31:52.038 "traddr": "127.0.0.1", 00:31:52.038 "trsvcid": "4420" 00:31:52.038 }, 00:31:52.038 "method": "nvmf_subsystem_add_listener", 00:31:52.038 "req_id": 1 00:31:52.038 } 00:31:52.038 Got JSON-RPC error response 00:31:52.038 response: 00:31:52.038 { 00:31:52.038 "code": -32602, 00:31:52.038 "message": 
"Invalid parameters" 00:31:52.038 } 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:52.038 01:34:27 keyring_file -- keyring/file.sh@46 -- # bperfpid=122678 00:31:52.038 01:34:27 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:31:52.038 01:34:27 keyring_file -- keyring/file.sh@48 -- # waitforlisten 122678 /var/tmp/bperf.sock 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 122678 ']' 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:31:52.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:52.038 01:34:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:52.038 [2024-05-15 01:34:27.548932] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 00:31:52.038 [2024-05-15 01:34:27.548976] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122678 ] 00:31:52.038 EAL: No free 2048 kB hugepages reported on node 1 00:31:52.038 [2024-05-15 01:34:27.616262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.038 [2024-05-15 01:34:27.690757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:53.017 01:34:28 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:53.017 01:34:28 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:31:53.017 01:34:28 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eKRdA1jjEc 00:31:53.017 01:34:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eKRdA1jjEc 00:31:53.017 01:34:28 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.E2jZbYTbE2 00:31:53.017 01:34:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.E2jZbYTbE2 00:31:53.017 01:34:28 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:31:53.017 01:34:28 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:31:53.017 01:34:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:53.017 01:34:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:53.017 01:34:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:53.276 
01:34:28 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.eKRdA1jjEc == \/\t\m\p\/\t\m\p\.\e\K\R\d\A\1\j\j\E\c ]] 00:31:53.276 01:34:28 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:31:53.276 01:34:28 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:31:53.276 01:34:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:53.276 01:34:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:53.276 01:34:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:53.535 01:34:29 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.E2jZbYTbE2 == \/\t\m\p\/\t\m\p\.\E\2\j\Z\b\Y\T\b\E\2 ]] 00:31:53.535 01:34:29 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:31:53.535 01:34:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:53.535 01:34:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:53.535 01:34:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:53.535 01:34:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:53.535 01:34:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:53.535 01:34:29 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:31:53.535 01:34:29 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:31:53.535 01:34:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:53.535 01:34:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:53.535 01:34:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:53.535 01:34:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:53.535 01:34:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:53.795 01:34:29 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:31:53.795 01:34:29 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:53.795 01:34:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:54.054 [2024-05-15 01:34:29.523736] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:54.054 nvme0n1 00:31:54.054 01:34:29 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:31:54.054 01:34:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:54.054 01:34:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:54.054 01:34:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:54.054 01:34:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:54.054 01:34:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:54.313 01:34:29 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:31:54.313 01:34:29 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:31:54.313 01:34:29 keyring_file -- 
keyring/common.sh@12 -- # get_key key1 00:31:54.313 01:34:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:54.313 01:34:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:54.313 01:34:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:54.313 01:34:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:54.313 01:34:29 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:31:54.313 01:34:29 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:31:54.571 Running I/O for 1 seconds... 00:31:55.508 00:31:55.508 Latency(us) 00:31:55.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:55.508 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:31:55.508 nvme0n1 : 1.01 9171.89 35.83 0.00 0.00 13881.17 8493.47 23488.10 00:31:55.508 =================================================================================================================== 00:31:55.508 Total : 9171.89 35.83 0.00 0.00 13881.17 8493.47 23488.10 00:31:55.508 0 00:31:55.508 01:34:31 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:55.508 01:34:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:55.767 01:34:31 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:31:55.767 01:34:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:55.767 01:34:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:55.767 01:34:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:55.767 01:34:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:55.767 01:34:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:55.767 01:34:31 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:31:55.767 01:34:31 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:31:55.767 01:34:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:55.767 01:34:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:56.025 01:34:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:56.026 01:34:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:56.026 01:34:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:56.026 01:34:31 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:31:56.026 01:34:31 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:56.026 01:34:31 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:56.026 01:34:31 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:56.026 01:34:31 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 
00:31:56.026 01:34:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:56.026 01:34:31 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:56.026 01:34:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:56.026 01:34:31 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:56.026 01:34:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:31:56.284 [2024-05-15 01:34:31.780968] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:31:56.284 [2024-05-15 01:34:31.781814] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e0e0 (107): Transport endpoint is not connected 00:31:56.284 [2024-05-15 01:34:31.782810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e0e0 (9): Bad file descriptor 00:31:56.284 [2024-05-15 01:34:31.783810] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:56.284 [2024-05-15 01:34:31.783821] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:31:56.284 [2024-05-15 01:34:31.783830] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:31:56.284 request: 00:31:56.284 { 00:31:56.284 "name": "nvme0", 00:31:56.284 "trtype": "tcp", 00:31:56.284 "traddr": "127.0.0.1", 00:31:56.284 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:56.284 "adrfam": "ipv4", 00:31:56.284 "trsvcid": "4420", 00:31:56.284 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:56.284 "psk": "key1", 00:31:56.284 "method": "bdev_nvme_attach_controller", 00:31:56.284 "req_id": 1 00:31:56.284 } 00:31:56.284 Got JSON-RPC error response 00:31:56.284 response: 00:31:56.284 { 00:31:56.284 "code": -32602, 00:31:56.284 "message": "Invalid parameters" 00:31:56.284 } 00:31:56.284 01:34:31 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:56.284 01:34:31 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:56.284 01:34:31 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:56.284 01:34:31 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:56.284 01:34:31 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:31:56.284 01:34:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:56.284 01:34:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:56.284 01:34:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:56.284 01:34:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:56.284 01:34:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:56.284 01:34:31 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:31:56.543 01:34:31 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:31:56.543 01:34:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:56.543 01:34:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:56.543 01:34:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:56.543 01:34:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:56.543 01:34:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:56.543 01:34:32 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:31:56.543 01:34:32 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:31:56.544 01:34:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:56.803 01:34:32 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:31:56.803 01:34:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:31:57.062 01:34:32 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:31:57.062 01:34:32 keyring_file -- keyring/file.sh@77 -- # jq length 00:31:57.062 01:34:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:57.062 01:34:32 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:31:57.062 01:34:32 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.eKRdA1jjEc 00:31:57.062 01:34:32 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.eKRdA1jjEc 00:31:57.062 01:34:32 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:57.062 01:34:32 
keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.eKRdA1jjEc 00:31:57.062 01:34:32 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:57.062 01:34:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:57.062 01:34:32 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:57.062 01:34:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:57.062 01:34:32 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eKRdA1jjEc 00:31:57.062 01:34:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eKRdA1jjEc 00:31:57.321 [2024-05-15 01:34:32.823839] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.eKRdA1jjEc': 0100660 00:31:57.321 [2024-05-15 01:34:32.823864] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:31:57.321 request: 00:31:57.321 { 00:31:57.321 "name": "key0", 00:31:57.321 "path": "/tmp/tmp.eKRdA1jjEc", 00:31:57.321 "method": "keyring_file_add_key", 00:31:57.321 "req_id": 1 00:31:57.321 } 00:31:57.321 Got JSON-RPC error response 00:31:57.321 response: 00:31:57.321 { 00:31:57.321 "code": -1, 00:31:57.321 "message": "Operation not permitted" 00:31:57.321 } 00:31:57.321 01:34:32 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:57.321 01:34:32 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:57.321 01:34:32 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:57.321 01:34:32 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:57.321 01:34:32 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.eKRdA1jjEc 00:31:57.321 01:34:32 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eKRdA1jjEc 00:31:57.321 01:34:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eKRdA1jjEc 00:31:57.580 01:34:33 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.eKRdA1jjEc 00:31:57.580 01:34:33 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:31:57.580 01:34:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:57.580 01:34:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:57.580 01:34:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:57.580 01:34:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:57.580 01:34:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:57.580 01:34:33 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:31:57.580 01:34:33 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:57.580 01:34:33 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:31:57.580 01:34:33 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:57.580 01:34:33 
keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:31:57.580 01:34:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:57.580 01:34:33 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:31:57.580 01:34:33 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:57.580 01:34:33 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:57.580 01:34:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:57.839 [2024-05-15 01:34:33.365271] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.eKRdA1jjEc': No such file or directory 00:31:57.839 [2024-05-15 01:34:33.365295] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:31:57.839 [2024-05-15 01:34:33.365317] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:31:57.839 [2024-05-15 01:34:33.365325] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:57.839 [2024-05-15 01:34:33.365334] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:31:57.839 request: 00:31:57.839 { 00:31:57.839 "name": "nvme0", 00:31:57.839 "trtype": "tcp", 00:31:57.839 "traddr": "127.0.0.1", 00:31:57.839 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:57.839 "adrfam": "ipv4", 00:31:57.839 "trsvcid": "4420", 00:31:57.839 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:57.839 "psk": "key0", 00:31:57.839 "method": "bdev_nvme_attach_controller", 00:31:57.839 "req_id": 1 00:31:57.839 } 00:31:57.839 Got JSON-RPC error response 00:31:57.839 response: 00:31:57.839 { 00:31:57.839 "code": -19, 00:31:57.839 "message": "No such device" 00:31:57.839 } 00:31:57.839 01:34:33 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:31:57.839 01:34:33 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:57.839 01:34:33 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:57.839 01:34:33 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:57.839 01:34:33 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:31:57.839 01:34:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:58.098 01:34:33 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:58.098 01:34:33 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:58.098 01:34:33 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:58.098 01:34:33 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:58.098 01:34:33 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:58.098 01:34:33 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:58.098 01:34:33 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.cyX30D8Ow8 00:31:58.098 01:34:33 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:58.098 01:34:33 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:58.098 01:34:33 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:58.098 01:34:33 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:58.098 01:34:33 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:58.098 01:34:33 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:58.098 01:34:33 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:58.098 01:34:33 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cyX30D8Ow8 00:31:58.098 01:34:33 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.cyX30D8Ow8 00:31:58.098 01:34:33 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.cyX30D8Ow8 00:31:58.098 01:34:33 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cyX30D8Ow8 00:31:58.098 01:34:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cyX30D8Ow8 00:31:58.098 01:34:33 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:58.098 01:34:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:58.357 nvme0n1 00:31:58.357 01:34:34 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:31:58.357 01:34:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:58.357 01:34:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:58.357 01:34:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:58.357 01:34:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:58.357 01:34:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:58.616 01:34:34 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:31:58.616 01:34:34 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:31:58.616 01:34:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:31:58.875 01:34:34 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:31:58.875 01:34:34 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:31:58.875 01:34:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:58.875 01:34:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:58.875 01:34:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:58.875 01:34:34 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:31:58.875 01:34:34 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:31:58.875 01:34:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:58.875 01:34:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:58.875 01:34:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:58.875 01:34:34 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:58.875 01:34:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:59.135 01:34:34 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:31:59.135 01:34:34 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:31:59.135 01:34:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:31:59.394 01:34:34 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:31:59.394 01:34:34 keyring_file -- keyring/file.sh@104 -- # jq length 00:31:59.394 01:34:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:59.394 01:34:35 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:31:59.394 01:34:35 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cyX30D8Ow8 00:31:59.394 01:34:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cyX30D8Ow8 00:31:59.654 01:34:35 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.E2jZbYTbE2 00:31:59.654 01:34:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.E2jZbYTbE2 00:31:59.913 01:34:35 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:31:59.913 01:34:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:00.173 nvme0n1 00:32:00.173 01:34:35 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:00.173 01:34:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:00.173 01:34:35 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:00.173 "subsystems": [ 00:32:00.173 { 00:32:00.173 "subsystem": "keyring", 00:32:00.173 "config": [ 00:32:00.173 { 00:32:00.173 "method": "keyring_file_add_key", 00:32:00.173 "params": { 00:32:00.173 "name": "key0", 00:32:00.173 "path": "/tmp/tmp.cyX30D8Ow8" 00:32:00.173 } 00:32:00.173 }, 00:32:00.173 { 00:32:00.173 "method": "keyring_file_add_key", 00:32:00.173 "params": { 00:32:00.173 "name": "key1", 00:32:00.173 "path": "/tmp/tmp.E2jZbYTbE2" 00:32:00.173 } 00:32:00.173 } 00:32:00.173 ] 00:32:00.173 }, 00:32:00.173 { 00:32:00.173 "subsystem": "iobuf", 00:32:00.173 "config": [ 00:32:00.173 { 00:32:00.173 "method": "iobuf_set_options", 00:32:00.173 "params": { 00:32:00.173 "small_pool_count": 8192, 00:32:00.173 "large_pool_count": 1024, 00:32:00.173 "small_bufsize": 8192, 00:32:00.173 "large_bufsize": 135168 00:32:00.173 } 00:32:00.173 } 00:32:00.173 ] 00:32:00.173 }, 00:32:00.173 { 00:32:00.173 "subsystem": "sock", 00:32:00.173 "config": [ 00:32:00.173 { 00:32:00.173 "method": "sock_impl_set_options", 00:32:00.173 "params": { 00:32:00.173 
"impl_name": "posix", 00:32:00.173 "recv_buf_size": 2097152, 00:32:00.173 "send_buf_size": 2097152, 00:32:00.173 "enable_recv_pipe": true, 00:32:00.173 "enable_quickack": false, 00:32:00.173 "enable_placement_id": 0, 00:32:00.173 "enable_zerocopy_send_server": true, 00:32:00.173 "enable_zerocopy_send_client": false, 00:32:00.173 "zerocopy_threshold": 0, 00:32:00.173 "tls_version": 0, 00:32:00.173 "enable_ktls": false 00:32:00.173 } 00:32:00.173 }, 00:32:00.173 { 00:32:00.173 "method": "sock_impl_set_options", 00:32:00.173 "params": { 00:32:00.173 "impl_name": "ssl", 00:32:00.173 "recv_buf_size": 4096, 00:32:00.173 "send_buf_size": 4096, 00:32:00.173 "enable_recv_pipe": true, 00:32:00.173 "enable_quickack": false, 00:32:00.173 "enable_placement_id": 0, 00:32:00.173 "enable_zerocopy_send_server": true, 00:32:00.173 "enable_zerocopy_send_client": false, 00:32:00.173 "zerocopy_threshold": 0, 00:32:00.173 "tls_version": 0, 00:32:00.173 "enable_ktls": false 00:32:00.173 } 00:32:00.173 } 00:32:00.173 ] 00:32:00.173 }, 00:32:00.173 { 00:32:00.173 "subsystem": "vmd", 00:32:00.173 "config": [] 00:32:00.173 }, 00:32:00.173 { 00:32:00.173 "subsystem": "accel", 00:32:00.173 "config": [ 00:32:00.173 { 00:32:00.173 "method": "accel_set_options", 00:32:00.173 "params": { 00:32:00.173 "small_cache_size": 128, 00:32:00.173 "large_cache_size": 16, 00:32:00.173 "task_count": 2048, 00:32:00.173 "sequence_count": 2048, 00:32:00.173 "buf_count": 2048 00:32:00.173 } 00:32:00.173 } 00:32:00.173 ] 00:32:00.173 }, 00:32:00.173 { 00:32:00.173 "subsystem": "bdev", 00:32:00.173 "config": [ 00:32:00.173 { 00:32:00.173 "method": "bdev_set_options", 00:32:00.173 "params": { 00:32:00.173 "bdev_io_pool_size": 65535, 00:32:00.173 "bdev_io_cache_size": 256, 00:32:00.173 "bdev_auto_examine": true, 00:32:00.173 "iobuf_small_cache_size": 128, 00:32:00.173 "iobuf_large_cache_size": 16 00:32:00.173 } 00:32:00.173 }, 00:32:00.173 { 00:32:00.173 "method": "bdev_raid_set_options", 00:32:00.173 "params": { 00:32:00.173 "process_window_size_kb": 1024 00:32:00.173 } 00:32:00.173 }, 00:32:00.173 { 00:32:00.173 "method": "bdev_iscsi_set_options", 00:32:00.173 "params": { 00:32:00.173 "timeout_sec": 30 00:32:00.173 } 00:32:00.173 }, 00:32:00.173 { 00:32:00.173 "method": "bdev_nvme_set_options", 00:32:00.173 "params": { 00:32:00.173 "action_on_timeout": "none", 00:32:00.173 "timeout_us": 0, 00:32:00.173 "timeout_admin_us": 0, 00:32:00.173 "keep_alive_timeout_ms": 10000, 00:32:00.173 "arbitration_burst": 0, 00:32:00.173 "low_priority_weight": 0, 00:32:00.173 "medium_priority_weight": 0, 00:32:00.173 "high_priority_weight": 0, 00:32:00.173 "nvme_adminq_poll_period_us": 10000, 00:32:00.173 "nvme_ioq_poll_period_us": 0, 00:32:00.173 "io_queue_requests": 512, 00:32:00.173 "delay_cmd_submit": true, 00:32:00.173 "transport_retry_count": 4, 00:32:00.173 "bdev_retry_count": 3, 00:32:00.173 "transport_ack_timeout": 0, 00:32:00.173 "ctrlr_loss_timeout_sec": 0, 00:32:00.173 "reconnect_delay_sec": 0, 00:32:00.173 "fast_io_fail_timeout_sec": 0, 00:32:00.173 "disable_auto_failback": false, 00:32:00.173 "generate_uuids": false, 00:32:00.173 "transport_tos": 0, 00:32:00.173 "nvme_error_stat": false, 00:32:00.173 "rdma_srq_size": 0, 00:32:00.173 "io_path_stat": false, 00:32:00.173 "allow_accel_sequence": false, 00:32:00.173 "rdma_max_cq_size": 0, 00:32:00.173 "rdma_cm_event_timeout_ms": 0, 00:32:00.173 "dhchap_digests": [ 00:32:00.173 "sha256", 00:32:00.173 "sha384", 00:32:00.173 "sha512" 00:32:00.173 ], 00:32:00.173 "dhchap_dhgroups": [ 00:32:00.173 "null", 
00:32:00.173 "ffdhe2048", 00:32:00.173 "ffdhe3072", 00:32:00.173 "ffdhe4096", 00:32:00.173 "ffdhe6144", 00:32:00.173 "ffdhe8192" 00:32:00.173 ] 00:32:00.173 } 00:32:00.173 }, 00:32:00.173 { 00:32:00.173 "method": "bdev_nvme_attach_controller", 00:32:00.173 "params": { 00:32:00.173 "name": "nvme0", 00:32:00.173 "trtype": "TCP", 00:32:00.173 "adrfam": "IPv4", 00:32:00.173 "traddr": "127.0.0.1", 00:32:00.173 "trsvcid": "4420", 00:32:00.173 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:00.173 "prchk_reftag": false, 00:32:00.173 "prchk_guard": false, 00:32:00.173 "ctrlr_loss_timeout_sec": 0, 00:32:00.173 "reconnect_delay_sec": 0, 00:32:00.173 "fast_io_fail_timeout_sec": 0, 00:32:00.173 "psk": "key0", 00:32:00.173 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:00.173 "hdgst": false, 00:32:00.173 "ddgst": false 00:32:00.173 } 00:32:00.173 }, 00:32:00.173 { 00:32:00.173 "method": "bdev_nvme_set_hotplug", 00:32:00.173 "params": { 00:32:00.173 "period_us": 100000, 00:32:00.173 "enable": false 00:32:00.173 } 00:32:00.173 }, 00:32:00.173 { 00:32:00.173 "method": "bdev_wait_for_examine" 00:32:00.173 } 00:32:00.173 ] 00:32:00.173 }, 00:32:00.173 { 00:32:00.173 "subsystem": "nbd", 00:32:00.173 "config": [] 00:32:00.173 } 00:32:00.173 ] 00:32:00.173 }' 00:32:00.173 01:34:35 keyring_file -- keyring/file.sh@114 -- # killprocess 122678 00:32:00.173 01:34:35 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 122678 ']' 00:32:00.173 01:34:35 keyring_file -- common/autotest_common.sh@950 -- # kill -0 122678 00:32:00.173 01:34:35 keyring_file -- common/autotest_common.sh@951 -- # uname 00:32:00.433 01:34:35 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:00.433 01:34:35 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 122678 00:32:00.433 01:34:35 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:00.433 01:34:35 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:00.433 01:34:35 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 122678' 00:32:00.433 killing process with pid 122678 00:32:00.433 01:34:35 keyring_file -- common/autotest_common.sh@965 -- # kill 122678 00:32:00.433 Received shutdown signal, test time was about 1.000000 seconds 00:32:00.433 00:32:00.433 Latency(us) 00:32:00.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:00.433 =================================================================================================================== 00:32:00.433 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:00.433 01:34:35 keyring_file -- common/autotest_common.sh@970 -- # wait 122678 00:32:00.433 01:34:36 keyring_file -- keyring/file.sh@117 -- # bperfpid=124253 00:32:00.433 01:34:36 keyring_file -- keyring/file.sh@119 -- # waitforlisten 124253 /var/tmp/bperf.sock 00:32:00.433 01:34:36 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 124253 ']' 00:32:00.433 01:34:36 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:00.433 01:34:36 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:00.433 01:34:36 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:00.433 01:34:36 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bperf.sock...' 00:32:00.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:00.433 01:34:36 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:00.433 01:34:36 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:00.433 "subsystems": [ 00:32:00.433 { 00:32:00.433 "subsystem": "keyring", 00:32:00.433 "config": [ 00:32:00.433 { 00:32:00.433 "method": "keyring_file_add_key", 00:32:00.433 "params": { 00:32:00.433 "name": "key0", 00:32:00.433 "path": "/tmp/tmp.cyX30D8Ow8" 00:32:00.433 } 00:32:00.433 }, 00:32:00.433 { 00:32:00.433 "method": "keyring_file_add_key", 00:32:00.433 "params": { 00:32:00.433 "name": "key1", 00:32:00.433 "path": "/tmp/tmp.E2jZbYTbE2" 00:32:00.433 } 00:32:00.433 } 00:32:00.433 ] 00:32:00.433 }, 00:32:00.433 { 00:32:00.433 "subsystem": "iobuf", 00:32:00.433 "config": [ 00:32:00.433 { 00:32:00.433 "method": "iobuf_set_options", 00:32:00.433 "params": { 00:32:00.433 "small_pool_count": 8192, 00:32:00.433 "large_pool_count": 1024, 00:32:00.433 "small_bufsize": 8192, 00:32:00.433 "large_bufsize": 135168 00:32:00.433 } 00:32:00.433 } 00:32:00.433 ] 00:32:00.433 }, 00:32:00.433 { 00:32:00.433 "subsystem": "sock", 00:32:00.433 "config": [ 00:32:00.433 { 00:32:00.433 "method": "sock_impl_set_options", 00:32:00.433 "params": { 00:32:00.433 "impl_name": "posix", 00:32:00.433 "recv_buf_size": 2097152, 00:32:00.433 "send_buf_size": 2097152, 00:32:00.433 "enable_recv_pipe": true, 00:32:00.433 "enable_quickack": false, 00:32:00.433 "enable_placement_id": 0, 00:32:00.433 "enable_zerocopy_send_server": true, 00:32:00.433 "enable_zerocopy_send_client": false, 00:32:00.433 "zerocopy_threshold": 0, 00:32:00.433 "tls_version": 0, 00:32:00.433 "enable_ktls": false 00:32:00.433 } 00:32:00.433 }, 00:32:00.433 { 00:32:00.433 "method": "sock_impl_set_options", 00:32:00.433 "params": { 00:32:00.433 "impl_name": "ssl", 00:32:00.433 "recv_buf_size": 4096, 00:32:00.433 "send_buf_size": 4096, 00:32:00.433 "enable_recv_pipe": true, 00:32:00.433 "enable_quickack": false, 00:32:00.433 "enable_placement_id": 0, 00:32:00.433 "enable_zerocopy_send_server": true, 00:32:00.433 "enable_zerocopy_send_client": false, 00:32:00.433 "zerocopy_threshold": 0, 00:32:00.433 "tls_version": 0, 00:32:00.433 "enable_ktls": false 00:32:00.433 } 00:32:00.433 } 00:32:00.433 ] 00:32:00.433 }, 00:32:00.433 { 00:32:00.433 "subsystem": "vmd", 00:32:00.433 "config": [] 00:32:00.433 }, 00:32:00.433 { 00:32:00.433 "subsystem": "accel", 00:32:00.433 "config": [ 00:32:00.433 { 00:32:00.433 "method": "accel_set_options", 00:32:00.433 "params": { 00:32:00.433 "small_cache_size": 128, 00:32:00.433 "large_cache_size": 16, 00:32:00.433 "task_count": 2048, 00:32:00.433 "sequence_count": 2048, 00:32:00.433 "buf_count": 2048 00:32:00.433 } 00:32:00.434 } 00:32:00.434 ] 00:32:00.434 }, 00:32:00.434 { 00:32:00.434 "subsystem": "bdev", 00:32:00.434 "config": [ 00:32:00.434 { 00:32:00.434 "method": "bdev_set_options", 00:32:00.434 "params": { 00:32:00.434 "bdev_io_pool_size": 65535, 00:32:00.434 "bdev_io_cache_size": 256, 00:32:00.434 "bdev_auto_examine": true, 00:32:00.434 "iobuf_small_cache_size": 128, 00:32:00.434 "iobuf_large_cache_size": 16 00:32:00.434 } 00:32:00.434 }, 00:32:00.434 { 00:32:00.434 "method": "bdev_raid_set_options", 00:32:00.434 "params": { 00:32:00.434 "process_window_size_kb": 1024 00:32:00.434 } 00:32:00.434 }, 00:32:00.434 { 00:32:00.434 "method": "bdev_iscsi_set_options", 00:32:00.434 "params": { 00:32:00.434 "timeout_sec": 30 00:32:00.434 } 
00:32:00.434 }, 00:32:00.434 { 00:32:00.434 "method": "bdev_nvme_set_options", 00:32:00.434 "params": { 00:32:00.434 "action_on_timeout": "none", 00:32:00.434 "timeout_us": 0, 00:32:00.434 "timeout_admin_us": 0, 00:32:00.434 "keep_alive_timeout_ms": 10000, 00:32:00.434 "arbitration_burst": 0, 00:32:00.434 "low_priority_weight": 0, 00:32:00.434 "medium_priority_weight": 0, 00:32:00.434 "high_priority_weight": 0, 00:32:00.434 "nvme_adminq_poll_period_us": 10000, 00:32:00.434 "nvme_ioq_poll_period_us": 0, 00:32:00.434 "io_queue_requests": 512, 00:32:00.434 "delay_cmd_submit": true, 00:32:00.434 "transport_retry_count": 4, 00:32:00.434 "bdev_retry_count": 3, 00:32:00.434 "transport_ack_timeout": 0, 00:32:00.434 "ctrlr_loss_timeout_sec": 0, 00:32:00.434 "reconnect_delay_sec": 0, 00:32:00.434 "fast_io_fail_timeout_sec": 0, 00:32:00.434 "disable_auto_failback": false, 00:32:00.434 "generate_uuids": false, 00:32:00.434 "transport_tos": 0, 00:32:00.434 "nvme_error_stat": false, 00:32:00.434 "rdma_srq_size": 0, 00:32:00.434 "io_path_stat": false, 00:32:00.434 "allow_accel_sequence": false, 00:32:00.434 "rdma_max_cq_size": 0, 00:32:00.434 "rdma_cm_event_timeout_ms": 0, 00:32:00.434 "dhchap_digests": [ 00:32:00.434 "sha256", 00:32:00.434 "sha384", 00:32:00.434 "sha512" 00:32:00.434 ], 00:32:00.434 "dhchap_dhgroups": [ 00:32:00.434 "null", 00:32:00.434 "ffdhe2048", 00:32:00.434 "ffdhe3072", 00:32:00.434 "ffdhe4096", 00:32:00.434 "ffdhe6144", 00:32:00.434 "ffdhe8192" 00:32:00.434 ] 00:32:00.434 } 00:32:00.434 }, 00:32:00.434 { 00:32:00.434 "method": "bdev_nvme_attach_controller", 00:32:00.434 "params": { 00:32:00.434 "name": "nvme0", 00:32:00.434 "trtype": "TCP", 00:32:00.434 "adrfam": "IPv4", 00:32:00.434 "traddr": "127.0.0.1", 00:32:00.434 "trsvcid": "4420", 00:32:00.434 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:00.434 "prchk_reftag": false, 00:32:00.434 "prchk_guard": false, 00:32:00.434 "ctrlr_loss_timeout_sec": 0, 00:32:00.434 "reconnect_delay_sec": 0, 00:32:00.434 "fast_io_fail_timeout_sec": 0, 00:32:00.434 "psk": "key0", 00:32:00.434 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:00.434 "hdgst": false, 00:32:00.434 "ddgst": false 00:32:00.434 } 00:32:00.434 }, 00:32:00.434 { 00:32:00.434 "method": "bdev_nvme_set_hotplug", 00:32:00.434 "params": { 00:32:00.434 "period_us": 100000, 00:32:00.434 "enable": false 00:32:00.434 } 00:32:00.434 }, 00:32:00.434 { 00:32:00.434 "method": "bdev_wait_for_examine" 00:32:00.434 } 00:32:00.434 ] 00:32:00.434 }, 00:32:00.434 { 00:32:00.434 "subsystem": "nbd", 00:32:00.434 "config": [] 00:32:00.434 } 00:32:00.434 ] 00:32:00.434 }' 00:32:00.434 01:34:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:00.693 [2024-05-15 01:34:36.162689] Starting SPDK v24.05-pre git sha1 aa13730db / DPDK 23.11.0 initialization... 
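The bdevperf instance above is launched with -z -c /dev/fd/63, so it waits for RPCs, takes its keyring/sock/bdev configuration from the JSON echoed by keyring/file.sh, and is then driven over /var/tmp/bperf.sock. As a rough sketch only (not part of the captured run), the same state could be registered and inspected by hand against a waiting bperf socket with rpc.py; the key names and temporary PSK paths are the ones shown in the config dump above, and the jq checks mirror the ones file.sh performs further down.

# Sketch, assuming an SPDK app started with -z is already listening on /var/tmp/bperf.sock
# and that the temporary PSK files from the config dump above still exist.
RPC=./scripts/rpc.py
SOCK=/var/tmp/bperf.sock
$RPC -s $SOCK keyring_file_add_key key0 /tmp/tmp.cyX30D8Ow8     # file-based PSK used for the TLS attach
$RPC -s $SOCK keyring_file_add_key key1 /tmp/tmp.E2jZbYTbE2
$RPC -s $SOCK keyring_get_keys | jq length                      # expect 2, as checked at file.sh@120
$RPC -s $SOCK bdev_nvme_get_controllers | jq -r '.[].name'      # expect nvme0, as checked at file.sh@123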
00:32:00.693 [2024-05-15 01:34:36.162746] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124253 ] 00:32:00.693 EAL: No free 2048 kB hugepages reported on node 1 00:32:00.693 [2024-05-15 01:34:36.231406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.693 [2024-05-15 01:34:36.302505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.952 [2024-05-15 01:34:36.453570] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:01.520 01:34:36 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:01.520 01:34:36 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:32:01.520 01:34:36 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:01.520 01:34:36 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:01.520 01:34:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:01.520 01:34:37 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:01.520 01:34:37 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:01.520 01:34:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:01.520 01:34:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:01.520 01:34:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:01.520 01:34:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:01.520 01:34:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:01.779 01:34:37 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:01.779 01:34:37 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:01.779 01:34:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:01.779 01:34:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:01.779 01:34:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:01.779 01:34:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:01.779 01:34:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:02.038 01:34:37 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:02.038 01:34:37 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:02.038 01:34:37 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:02.038 01:34:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:02.038 01:34:37 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:02.038 01:34:37 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:02.038 01:34:37 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.cyX30D8Ow8 /tmp/tmp.E2jZbYTbE2 00:32:02.038 01:34:37 keyring_file -- keyring/file.sh@20 -- # killprocess 124253 00:32:02.038 01:34:37 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 124253 ']' 00:32:02.038 01:34:37 keyring_file -- common/autotest_common.sh@950 -- # kill -0 124253 00:32:02.038 01:34:37 keyring_file -- common/autotest_common.sh@951 -- # 
uname 00:32:02.038 01:34:37 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:02.038 01:34:37 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 124253 00:32:02.038 01:34:37 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:32:02.038 01:34:37 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:32:02.038 01:34:37 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 124253' 00:32:02.038 killing process with pid 124253 00:32:02.038 01:34:37 keyring_file -- common/autotest_common.sh@965 -- # kill 124253 00:32:02.038 Received shutdown signal, test time was about 1.000000 seconds 00:32:02.038 00:32:02.038 Latency(us) 00:32:02.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:02.038 =================================================================================================================== 00:32:02.038 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:02.038 01:34:37 keyring_file -- common/autotest_common.sh@970 -- # wait 124253 00:32:02.298 01:34:37 keyring_file -- keyring/file.sh@21 -- # killprocess 122639 00:32:02.298 01:34:37 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 122639 ']' 00:32:02.298 01:34:37 keyring_file -- common/autotest_common.sh@950 -- # kill -0 122639 00:32:02.298 01:34:37 keyring_file -- common/autotest_common.sh@951 -- # uname 00:32:02.298 01:34:37 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:02.298 01:34:37 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 122639 00:32:02.298 01:34:37 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:02.298 01:34:37 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:02.298 01:34:37 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 122639' 00:32:02.298 killing process with pid 122639 00:32:02.298 01:34:37 keyring_file -- common/autotest_common.sh@965 -- # kill 122639 00:32:02.298 [2024-05-15 01:34:37.975227] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:32:02.298 [2024-05-15 01:34:37.975260] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:02.298 01:34:37 keyring_file -- common/autotest_common.sh@970 -- # wait 122639 00:32:02.878 00:32:02.878 real 0m11.968s 00:32:02.878 user 0m27.295s 00:32:02.878 sys 0m3.312s 00:32:02.878 01:34:38 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:02.878 01:34:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:02.878 ************************************ 00:32:02.878 END TEST keyring_file 00:32:02.878 ************************************ 00:32:02.878 01:34:38 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:32:02.878 01:34:38 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:32:02.878 01:34:38 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:02.878 01:34:38 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:02.878 01:34:38 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:32:02.878 01:34:38 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 00:32:02.878 01:34:38 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:32:02.878 01:34:38 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:02.878 01:34:38 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:02.878 01:34:38 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:02.878 01:34:38 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:32:02.878 01:34:38 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:02.878 01:34:38 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:32:02.878 01:34:38 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:02.878 01:34:38 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:02.878 01:34:38 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:02.878 01:34:38 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:32:02.878 01:34:38 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:32:02.878 01:34:38 -- common/autotest_common.sh@720 -- # xtrace_disable 00:32:02.878 01:34:38 -- common/autotest_common.sh@10 -- # set +x 00:32:02.878 01:34:38 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:32:02.878 01:34:38 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:32:02.878 01:34:38 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:32:02.878 01:34:38 -- common/autotest_common.sh@10 -- # set +x 00:32:09.480 INFO: APP EXITING 00:32:09.480 INFO: killing all VMs 00:32:09.480 INFO: killing vhost app 00:32:09.480 INFO: EXIT DONE 00:32:11.386 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:32:11.386 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:32:11.386 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:32:11.386 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:32:11.386 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:32:11.386 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:32:11.386 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:32:11.386 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:32:11.386 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:32:11.386 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:32:11.645 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:32:11.645 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:32:11.645 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:32:11.645 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:32:11.645 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:32:11.645 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:32:11.645 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:32:14.935 Cleaning 00:32:14.935 Removing: /var/run/dpdk/spdk0/config 00:32:14.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:14.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:14.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:14.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:14.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:14.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:14.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:14.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:14.935 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:14.935 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:14.935 Removing: /var/run/dpdk/spdk1/config 00:32:14.935 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:14.935 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:14.935 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:14.935 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:14.935 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:14.935 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:14.935 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:14.935 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:14.935 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:14.935 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:14.935 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:14.935 Removing: /var/run/dpdk/spdk2/config 00:32:14.935 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:14.935 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:14.935 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:14.935 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:14.935 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:14.935 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:14.935 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:14.935 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:14.935 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:14.935 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:14.935 Removing: /var/run/dpdk/spdk3/config 00:32:14.935 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:14.935 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:14.935 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:14.935 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:14.935 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:14.935 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:14.936 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:14.936 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:14.936 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:14.936 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:14.936 Removing: /var/run/dpdk/spdk4/config 00:32:14.936 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:14.936 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:14.936 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:14.936 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:14.936 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:14.936 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:14.936 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:14.936 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:14.936 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:14.936 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:14.936 Removing: /dev/shm/bdev_svc_trace.1 00:32:14.936 Removing: /dev/shm/nvmf_trace.0 00:32:14.936 Removing: /dev/shm/spdk_tgt_trace.pid3911460 00:32:14.936 Removing: /var/run/dpdk/spdk0 00:32:14.936 Removing: /var/run/dpdk/spdk1 00:32:14.936 Removing: /var/run/dpdk/spdk2 00:32:14.936 Removing: /var/run/dpdk/spdk3 00:32:14.936 Removing: /var/run/dpdk/spdk4 00:32:14.936 Removing: /var/run/dpdk/spdk_pid100992 00:32:14.936 Removing: /var/run/dpdk/spdk_pid103564 00:32:14.936 Removing: /var/run/dpdk/spdk_pid104781 00:32:14.936 Removing: /var/run/dpdk/spdk_pid113963 00:32:14.936 Removing: /var/run/dpdk/spdk_pid114491 00:32:14.936 Removing: /var/run/dpdk/spdk_pid115020 00:32:14.936 Removing: /var/run/dpdk/spdk_pid117483 00:32:14.936 Removing: /var/run/dpdk/spdk_pid118019 00:32:14.936 Removing: /var/run/dpdk/spdk_pid118511 00:32:14.936 Removing: /var/run/dpdk/spdk_pid122639 00:32:14.936 Removing: /var/run/dpdk/spdk_pid122678 00:32:14.936 Removing: /var/run/dpdk/spdk_pid124253 00:32:14.936 
Removing: /var/run/dpdk/spdk_pid12658 00:32:15.195 Removing: /var/run/dpdk/spdk_pid16135 00:32:15.195 Removing: /var/run/dpdk/spdk_pid1881 00:32:15.195 Removing: /var/run/dpdk/spdk_pid2145 00:32:15.195 Removing: /var/run/dpdk/spdk_pid21740 00:32:15.195 Removing: /var/run/dpdk/spdk_pid2360 00:32:15.195 Removing: /var/run/dpdk/spdk_pid2701 00:32:15.195 Removing: /var/run/dpdk/spdk_pid2720 00:32:15.195 Removing: /var/run/dpdk/spdk_pid27488 00:32:15.195 Removing: /var/run/dpdk/spdk_pid36447 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3908987 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3910246 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3911460 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3912172 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3913177 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3913393 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3914393 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3914659 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3914860 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3916509 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3917950 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3918268 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3918588 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3918931 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3919255 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3919547 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3919827 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3920135 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3920996 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3924147 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3924452 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3924754 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3924916 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3925444 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3925597 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3926166 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3926320 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3926726 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3926746 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3927041 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3927196 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3927679 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3927961 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3928286 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3928586 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3928619 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3928881 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3929117 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3929374 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3929607 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3929856 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3930117 00:32:15.195 Removing: /var/run/dpdk/spdk_pid3930400 00:32:15.454 Removing: /var/run/dpdk/spdk_pid3930679 00:32:15.454 Removing: /var/run/dpdk/spdk_pid3930964 00:32:15.454 Removing: /var/run/dpdk/spdk_pid3931337 00:32:15.454 Removing: /var/run/dpdk/spdk_pid3931663 00:32:15.454 Removing: /var/run/dpdk/spdk_pid3932007 00:32:15.454 Removing: /var/run/dpdk/spdk_pid3932614 00:32:15.455 Removing: /var/run/dpdk/spdk_pid3932951 00:32:15.455 Removing: /var/run/dpdk/spdk_pid3933232 00:32:15.455 Removing: /var/run/dpdk/spdk_pid3933511 00:32:15.455 Removing: /var/run/dpdk/spdk_pid3933803 00:32:15.455 Removing: /var/run/dpdk/spdk_pid3934085 00:32:15.455 Removing: /var/run/dpdk/spdk_pid3934379 00:32:15.455 Removing: /var/run/dpdk/spdk_pid3934639 00:32:15.455 Removing: /var/run/dpdk/spdk_pid3934932 00:32:15.455 Removing: 
/var/run/dpdk/spdk_pid3935014 00:32:15.455 Removing: /var/run/dpdk/spdk_pid3935365 00:32:15.455 Removing: /var/run/dpdk/spdk_pid3939414 00:32:15.455 Removing: /var/run/dpdk/spdk_pid3986739 00:32:15.455 Removing: /var/run/dpdk/spdk_pid3991254 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4001666 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4007281 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4011781 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4012526 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4024566 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4024574 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4025623 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4026432 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4027284 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4028149 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4028155 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4028570 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4028719 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4028851 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4029919 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4030725 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4031784 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4032319 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4032326 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4032596 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4033839 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4034990 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4043610 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4043996 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4048435 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4054559 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4057292 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4068079 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4078112 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4079968 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4081022 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4098625 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4102776 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4127512 00:32:15.455 Removing: /var/run/dpdk/spdk_pid4132254 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4133956 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4135982 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4136185 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4136306 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4136553 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4137171 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4139247 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4140128 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4140700 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4143082 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4143693 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4144526 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4148816 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4160129 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4164254 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4170552 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4172034 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4173548 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4178160 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4182635 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4190552 00:32:15.714 Removing: /var/run/dpdk/spdk_pid4190639 00:32:15.714 Removing: /var/run/dpdk/spdk_pid43984 00:32:15.714 Removing: /var/run/dpdk/spdk_pid44003 00:32:15.714 Removing: /var/run/dpdk/spdk_pid63703 00:32:15.714 Removing: /var/run/dpdk/spdk_pid64300 00:32:15.714 Removing: /var/run/dpdk/spdk_pid64913 00:32:15.714 Removing: 
/var/run/dpdk/spdk_pid65652 00:32:15.714 Removing: /var/run/dpdk/spdk_pid66511 00:32:15.714 Removing: /var/run/dpdk/spdk_pid67243 00:32:15.714 Removing: /var/run/dpdk/spdk_pid67879 00:32:15.714 Removing: /var/run/dpdk/spdk_pid68566 00:32:15.714 Removing: /var/run/dpdk/spdk_pid72966 00:32:15.714 Removing: /var/run/dpdk/spdk_pid73230 00:32:15.714 Removing: /var/run/dpdk/spdk_pid7628 00:32:15.714 Removing: /var/run/dpdk/spdk_pid79593 00:32:15.714 Removing: /var/run/dpdk/spdk_pid79813 00:32:15.714 Removing: /var/run/dpdk/spdk_pid8078 00:32:15.714 Removing: /var/run/dpdk/spdk_pid82127 00:32:15.714 Removing: /var/run/dpdk/spdk_pid90271 00:32:15.714 Removing: /var/run/dpdk/spdk_pid90416 00:32:15.714 Removing: /var/run/dpdk/spdk_pid95766 00:32:15.714 Removing: /var/run/dpdk/spdk_pid97772 00:32:15.714 Removing: /var/run/dpdk/spdk_pid99787 00:32:15.714 Clean 00:32:15.973 01:34:51 -- common/autotest_common.sh@1447 -- # return 0 00:32:15.973 01:34:51 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:32:15.973 01:34:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:15.973 01:34:51 -- common/autotest_common.sh@10 -- # set +x 00:32:15.973 01:34:51 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:32:15.973 01:34:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:15.973 01:34:51 -- common/autotest_common.sh@10 -- # set +x 00:32:15.973 01:34:51 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:15.973 01:34:51 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:15.973 01:34:51 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:15.973 01:34:51 -- spdk/autotest.sh@387 -- # hash lcov 00:32:15.973 01:34:51 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:15.973 01:34:51 -- spdk/autotest.sh@389 -- # hostname 00:32:15.973 01:34:51 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-22 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:16.232 geninfo: WARNING: invalid characters removed from testname! 
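The lcov capture above and the merge/filter passes that follow are the standard autotest coverage flow: capture run-time counters from the build tree, add them to the pre-test baseline, then strip DPDK, system, and example/tool sources from the merged tracefile. A condensed sketch with the Jenkins workspace path shortened to $SRC/$OUT and the option list abbreviated (the full invocations are the lcov lines recorded in the log):

# Condensed sketch of the coverage post-processing; paths and rc options abbreviated.
SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT=$SRC/../output
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
$LCOV -c -d $SRC -t spdk-wfp-22 -o $OUT/cov_test.info                      # capture counters from this run
$LCOV -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info   # merge with the baseline
$LCOV -r $OUT/cov_total.info '*/dpdk/*' -o $OUT/cov_total.info             # drop DPDK sources
$LCOV -r $OUT/cov_total.info '/usr/*' -o $OUT/cov_total.info               # drop system headers
$LCOV -r $OUT/cov_total.info '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*' -o $OUT/cov_total.info  # drop tools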
00:32:38.162 01:35:11 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:38.421 01:35:13 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:40.325 01:35:15 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:41.712 01:35:17 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:43.673 01:35:18 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:45.051 01:35:20 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:32:46.957 01:35:22 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:46.957 01:35:22 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:46.957 01:35:22 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:46.957 01:35:22 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:46.957 01:35:22 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:46.957 01:35:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.957 01:35:22 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.957 01:35:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.957 01:35:22 -- paths/export.sh@5 -- $ export PATH 00:32:46.957 01:35:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:46.957 01:35:22 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:32:46.957 01:35:22 -- common/autobuild_common.sh@437 -- $ date +%s 00:32:46.957 01:35:22 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715729722.XXXXXX 00:32:46.957 01:35:22 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715729722.ganOJ6 00:32:46.957 01:35:22 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:32:46.957 01:35:22 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:32:46.957 01:35:22 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:32:46.957 01:35:22 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:32:46.957 01:35:22 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:32:46.957 01:35:22 -- common/autobuild_common.sh@453 -- $ get_config_params 00:32:46.957 01:35:22 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:32:46.957 01:35:22 -- common/autotest_common.sh@10 -- $ set +x 00:32:46.957 01:35:22 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:32:46.957 01:35:22 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:32:46.957 01:35:22 -- pm/common@17 -- $ local monitor 00:32:46.957 01:35:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:46.957 01:35:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:46.957 01:35:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:46.957 01:35:22 -- pm/common@21 -- $ date +%s 00:32:46.957 01:35:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:46.957 01:35:22 -- pm/common@21 -- $ date +%s 00:32:46.957 
01:35:22 -- pm/common@25 -- $ sleep 1 00:32:46.957 01:35:22 -- pm/common@21 -- $ date +%s 00:32:46.957 01:35:22 -- pm/common@21 -- $ date +%s 00:32:46.957 01:35:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715729722 00:32:46.957 01:35:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715729722 00:32:46.957 01:35:22 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715729722 00:32:46.957 01:35:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715729722 00:32:46.957 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715729722_collect-vmstat.pm.log 00:32:46.957 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715729722_collect-cpu-load.pm.log 00:32:46.957 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715729722_collect-cpu-temp.pm.log 00:32:46.957 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715729722_collect-bmc-pm.bmc.pm.log 00:32:47.893 01:35:23 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:32:47.893 01:35:23 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:32:47.893 01:35:23 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:47.893 01:35:23 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:32:47.893 01:35:23 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:32:47.893 01:35:23 -- spdk/autopackage.sh@19 -- $ timing_finish 00:32:47.893 01:35:23 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:32:47.893 01:35:23 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:32:47.893 01:35:23 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:47.893 01:35:23 -- spdk/autopackage.sh@20 -- $ exit 0 00:32:47.893 01:35:23 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:32:47.893 01:35:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:32:47.893 01:35:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:32:47.893 01:35:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:47.893 01:35:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:32:47.893 01:35:23 -- pm/common@44 -- $ pid=137834 00:32:47.893 01:35:23 -- pm/common@50 -- $ kill -TERM 137834 00:32:47.893 01:35:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:47.893 01:35:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:32:47.893 01:35:23 -- pm/common@44 -- $ pid=137836 00:32:47.893 01:35:23 -- pm/common@50 -- $ kill 
-TERM 137836 00:32:47.893 01:35:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:47.894 01:35:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:32:47.894 01:35:23 -- pm/common@44 -- $ pid=137838 00:32:47.894 01:35:23 -- pm/common@50 -- $ kill -TERM 137838 00:32:47.894 01:35:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:32:47.894 01:35:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:32:47.894 01:35:23 -- pm/common@44 -- $ pid=137861 00:32:47.894 01:35:23 -- pm/common@50 -- $ sudo -E kill -TERM 137861 00:32:47.894 + [[ -n 3800676 ]] 00:32:47.894 + sudo kill 3800676 00:32:47.903 [Pipeline] } 00:32:47.925 [Pipeline] // stage 00:32:47.932 [Pipeline] } 00:32:47.969 [Pipeline] // timeout 00:32:47.977 [Pipeline] } 00:32:47.990 [Pipeline] // catchError 00:32:47.993 [Pipeline] } 00:32:48.003 [Pipeline] // wrap 00:32:48.007 [Pipeline] } 00:32:48.016 [Pipeline] // catchError 00:32:48.023 [Pipeline] stage 00:32:48.025 [Pipeline] { (Epilogue) 00:32:48.034 [Pipeline] catchError 00:32:48.035 [Pipeline] { 00:32:48.044 [Pipeline] echo 00:32:48.045 Cleanup processes 00:32:48.048 [Pipeline] sh 00:32:48.327 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:48.327 137936 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:32:48.327 138283 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:48.342 [Pipeline] sh 00:32:48.626 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:32:48.626 ++ grep -v 'sudo pgrep' 00:32:48.626 ++ awk '{print $1}' 00:32:48.626 + sudo kill -9 137936 00:32:48.638 [Pipeline] sh 00:32:48.918 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:32:48.918 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:32:54.187 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:32:57.520 [Pipeline] sh 00:32:57.804 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:32:57.804 Artifacts sizes are good 00:32:57.818 [Pipeline] archiveArtifacts 00:32:57.824 Archiving artifacts 00:32:57.960 [Pipeline] sh 00:32:58.234 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:32:58.249 [Pipeline] cleanWs 00:32:58.259 [WS-CLEANUP] Deleting project workspace... 00:32:58.259 [WS-CLEANUP] Deferred wipeout is used... 00:32:58.265 [WS-CLEANUP] done 00:32:58.266 [Pipeline] } 00:32:58.281 [Pipeline] // catchError 00:32:58.291 [Pipeline] sh 00:32:58.571 + logger -p user.info -t JENKINS-CI 00:32:58.579 [Pipeline] } 00:32:58.595 [Pipeline] // stage 00:32:58.601 [Pipeline] } 00:32:58.616 [Pipeline] // node 00:32:58.621 [Pipeline] End of Pipeline 00:32:58.650 Finished: SUCCESS
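For local reproduction, the keyring_file stage exercised in this run is a self-contained script in the SPDK tree. A hedged sketch of re-running just that stage outside Jenkins follows; the script path is inferred from the keyring/file.sh xtrace prefixes above and the environment mirrors autorun-spdk.conf, so treat both as assumptions rather than commands taken from this log.

# Sketch only; path and prerequisites are assumptions based on the trace above.
cd /path/to/spdk                 # hypothetical local checkout, built with the configure flags shown in this run
sudo ./test/keyring/file.sh      # the keyring_file tests traced above (CI runs them with elevated privileges)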